00:00:00.001  Started by upstream project "autotest-per-patch" build number 132845
00:00:00.001  originally caused by:
00:00:00.001   Started by user sys_sgci
00:00:00.014  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.015  The recommended git tool is: git
00:00:00.015  using credential 00000000-0000-0000-0000-000000000002
00:00:00.017   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.033  Fetching changes from the remote Git repository
00:00:00.035   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.051  Using shallow fetch with depth 1
00:00:00.051  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.051   > git --version # timeout=10
00:00:00.075   > git --version # 'git version 2.39.2'
00:00:00.075  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.104  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.104   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.365   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.377   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.389  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.389   > git config core.sparsecheckout # timeout=10
00:00:02.400   > git read-tree -mu HEAD # timeout=10
00:00:02.436   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.472  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.472   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
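
The shallow checkout above can be reproduced outside Jenkins; a minimal sketch, with the repository URL and revision copied from the log and credential/proxy handling omitted:

    # shallow-fetch one branch, then detach onto the fetched revision
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # FETCH_HEAD for this build
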
00:00:02.559  [Pipeline] Start of Pipeline
00:00:02.570  [Pipeline] library
00:00:02.571  Loading library shm_lib@master
00:00:02.571  Library shm_lib@master is cached. Copying from home.
00:00:02.586  [Pipeline] node
00:00:02.594  Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:02.595  [Pipeline] {
00:00:02.603  [Pipeline] catchError
00:00:02.604  [Pipeline] {
00:00:02.612  [Pipeline] wrap
00:00:02.616  [Pipeline] {
00:00:02.620  [Pipeline] stage
00:00:02.621  [Pipeline] { (Prologue)
00:00:02.631  [Pipeline] echo
00:00:02.632  Node: VM-host-SM4
00:00:02.636  [Pipeline] cleanWs
00:00:02.643  [WS-CLEANUP] Deleting project workspace...
00:00:02.643  [WS-CLEANUP] Deferred wipeout is used...
00:00:02.648  [WS-CLEANUP] done
00:00:02.838  [Pipeline] setCustomBuildProperty
00:00:02.952  [Pipeline] httpRequest
00:00:03.263  [Pipeline] echo
00:00:03.265  Sorcerer 10.211.164.20 is alive
00:00:03.273  [Pipeline] retry
00:00:03.275  [Pipeline] {
00:00:03.287  [Pipeline] httpRequest
00:00:03.291  HttpMethod: GET
00:00:03.291  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.291  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.292  Response Code: HTTP/1.1 200 OK
00:00:03.293  Success: Status code 200 is in the accepted range: 200,404
00:00:03.293  Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.439  [Pipeline] }
00:00:03.455  [Pipeline] // retry
00:00:03.462  [Pipeline] sh
00:00:03.738  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.751  [Pipeline] httpRequest
00:00:04.280  [Pipeline] echo
00:00:04.281  Sorcerer 10.211.164.20 is alive
00:00:04.291  [Pipeline] retry
00:00:04.293  [Pipeline] {
00:00:04.307  [Pipeline] httpRequest
00:00:04.311  HttpMethod: GET
00:00:04.311  URL: http://10.211.164.20/packages/spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:00:04.312  Sending request to url: http://10.211.164.20/packages/spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:00:04.313  Response Code: HTTP/1.1 200 OK
00:00:04.313  Success: Status code 200 is in the accepted range: 200,404
00:00:04.314  Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:01:11.215  [Pipeline] }
00:01:11.234  [Pipeline] // retry
00:01:11.243  [Pipeline] sh
00:01:11.524  + tar --no-same-owner -xf spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:01:14.824  [Pipeline] sh
00:01:15.180  + git -C spdk log --oneline -n5
00:01:15.180  3aefe4228 mk/spdk.common.mk Use pattern substitution instead of prefix removal
00:01:15.180  2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:15.180  66289a6db build: use VERSION file for storing version
00:01:15.180  626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:15.180  cec5ba284 nvme/rdma: Register UMR per IO request
00:01:15.200  [Pipeline] writeFile
00:01:15.217  [Pipeline] sh
00:01:15.590  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:15.602  [Pipeline] sh
00:01:15.887  + cat autorun-spdk.conf
00:01:15.887  SPDK_TEST_UNITTEST=1
00:01:15.887  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.887  SPDK_TEST_NVME=1
00:01:15.887  SPDK_TEST_BLOCKDEV=1
00:01:15.887  SPDK_RUN_ASAN=1
00:01:15.887  SPDK_RUN_UBSAN=1
00:01:15.887  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.895  RUN_NIGHTLY=0
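
autorun-spdk.conf is a flat KEY=value file; the test scripts source it so each flag becomes a shell variable, and individual suites are gated on those flags (the `++ SPDK_TEST_*=1` lines further down are exactly this sourcing under xtrace). A minimal sketch of the consuming pattern, simplified from what the autorun scripts do:

    #!/usr/bin/env bash
    # pull the job configuration into the environment
    source autorun-spdk.conf

    # each suite checks its own flag, mirroring the (( SPDK_TEST_* == 1 )) guards in the log
    if (( SPDK_TEST_NVME == 1 )); then
        echo "running NVMe functional tests"
    fi
    (( SPDK_RUN_ASAN == 1 )) && echo "ASan-instrumented build requested"
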
00:01:15.897  [Pipeline] }
00:01:15.912  [Pipeline] // stage
00:01:15.934  [Pipeline] stage
00:01:15.936  [Pipeline] { (Run VM)
00:01:15.950  [Pipeline] sh
00:01:16.235  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:16.235  + echo 'Start stage prepare_nvme.sh'
00:01:16.235  Start stage prepare_nvme.sh
00:01:16.235  + [[ -n 4 ]]
00:01:16.235  + disk_prefix=ex4
00:01:16.235  + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]]
00:01:16.235  + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]]
00:01:16.235  + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf
00:01:16.235  ++ SPDK_TEST_UNITTEST=1
00:01:16.235  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.235  ++ SPDK_TEST_NVME=1
00:01:16.235  ++ SPDK_TEST_BLOCKDEV=1
00:01:16.235  ++ SPDK_RUN_ASAN=1
00:01:16.235  ++ SPDK_RUN_UBSAN=1
00:01:16.235  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.235  ++ RUN_NIGHTLY=0
00:01:16.235  + cd /var/jenkins/workspace/ubuntu24-vg-autotest
00:01:16.235  + nvme_files=()
00:01:16.235  + declare -A nvme_files
00:01:16.235  + backend_dir=/var/lib/libvirt/images/backends
00:01:16.235  + nvme_files['nvme.img']=5G
00:01:16.235  + nvme_files['nvme-cmb.img']=5G
00:01:16.235  + nvme_files['nvme-multi0.img']=4G
00:01:16.235  + nvme_files['nvme-multi1.img']=4G
00:01:16.235  + nvme_files['nvme-multi2.img']=4G
00:01:16.235  + nvme_files['nvme-openstack.img']=8G
00:01:16.235  + nvme_files['nvme-zns.img']=5G
00:01:16.235  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:16.235  + ((  SPDK_TEST_FTL == 1  ))
00:01:16.235  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:16.235  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:16.235  + for nvme in "${!nvme_files[@]}"
00:01:16.235  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:16.235  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:16.235  + for nvme in "${!nvme_files[@]}"
00:01:16.235  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:16.235  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:16.235  + for nvme in "${!nvme_files[@]}"
00:01:16.235  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:16.235  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:16.235  + for nvme in "${!nvme_files[@]}"
00:01:16.235  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:16.235  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:16.235  + for nvme in "${!nvme_files[@]}"
00:01:16.235  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:16.235  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:16.495  + for nvme in "${!nvme_files[@]}"
00:01:16.495  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:16.495  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:16.495  + for nvme in "${!nvme_files[@]}"
00:01:16.495  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:16.495  Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:16.495  ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:16.754  + echo 'End stage prepare_nvme.sh'
00:01:16.754  End stage prepare_nvme.sh
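
Each 'Formatting ...' line above is qemu-img output, so create_nvme_img.sh evidently boils down to one qemu-img create call per backing file; a minimal sketch under that assumption, with path and size taken from the log:

    # create a raw, falloc-preallocated backing image for one emulated NVMe disk
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex4-nvme.img 5G
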
00:01:16.767  [Pipeline] sh
00:01:17.052  + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:17.052  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2404
00:01:17.052  
00:01:17.052  DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant
00:01:17.052  SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk
00:01:17.052  VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest
00:01:17.052  HELP=0
00:01:17.052  DRY_RUN=0
00:01:17.052  NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,
00:01:17.052  NVME_DISKS_TYPE=nvme,
00:01:17.052  NVME_AUTO_CREATE=0
00:01:17.052  NVME_DISKS_NAMESPACES=,
00:01:17.052  NVME_CMB=,
00:01:17.052  NVME_PMR=,
00:01:17.052  NVME_ZNS=,
00:01:17.052  NVME_MS=,
00:01:17.052  NVME_FDP=,
00:01:17.052  SPDK_VAGRANT_DISTRO=ubuntu2404
00:01:17.052  SPDK_VAGRANT_VMCPU=10
00:01:17.052  SPDK_VAGRANT_VMRAM=12288
00:01:17.052  SPDK_VAGRANT_PROVIDER=libvirt
00:01:17.052  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:17.052  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:17.052  SPDK_OPENSTACK_NETWORK=0
00:01:17.052  VAGRANT_PACKAGE_BOX=0
00:01:17.052  VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:17.052  FORCE_DISTRO=true
00:01:17.052  VAGRANT_BOX_VERSION=
00:01:17.052  EXTRA_VAGRANTFILES=
00:01:17.052  NIC_MODEL=e1000
00:01:17.052  
00:01:17.052  mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt'
00:01:17.052  /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest
00:01:20.339  Bringing machine 'default' up with 'libvirt' provider...
00:01:20.339  ==> default: Creating image (snapshot of base box volume).
00:01:20.598  ==> default: Creating domain with the following settings...
00:01:20.598  ==> default:  -- Name:              ubuntu2404-24.04-1720510786-2314_default_1733924283_0db7b7ee1ad8e4a49da9
00:01:20.598  ==> default:  -- Domain type:       kvm
00:01:20.598  ==> default:  -- Cpus:              10
00:01:20.598  ==> default:  -- Feature:           acpi
00:01:20.598  ==> default:  -- Feature:           apic
00:01:20.598  ==> default:  -- Feature:           pae
00:01:20.598  ==> default:  -- Memory:            12288M
00:01:20.598  ==> default:  -- Memory Backing:    hugepages: 
00:01:20.598  ==> default:  -- Management MAC:    
00:01:20.598  ==> default:  -- Loader:            
00:01:20.598  ==> default:  -- Nvram:             
00:01:20.598  ==> default:  -- Base box:          spdk/ubuntu2404
00:01:20.598  ==> default:  -- Storage pool:      default
00:01:20.598  ==> default:  -- Image:             /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1733924283_0db7b7ee1ad8e4a49da9.img (20G)
00:01:20.598  ==> default:  -- Volume Cache:      default
00:01:20.598  ==> default:  -- Kernel:            
00:01:20.598  ==> default:  -- Initrd:            
00:01:20.598  ==> default:  -- Graphics Type:     vnc
00:01:20.598  ==> default:  -- Graphics Port:     -1
00:01:20.598  ==> default:  -- Graphics IP:       127.0.0.1
00:01:20.598  ==> default:  -- Graphics Password: Not defined
00:01:20.598  ==> default:  -- Video Type:        cirrus
00:01:20.598  ==> default:  -- Video VRAM:        9216
00:01:20.598  ==> default:  -- Sound Type:	
00:01:20.598  ==> default:  -- Keymap:            en-us
00:01:20.598  ==> default:  -- TPM Path:          
00:01:20.598  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:20.598  ==> default:  -- Command line args: 
00:01:20.598  ==> default:     -> value=-device, 
00:01:20.598  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:01:20.598  ==> default:     -> value=-drive, 
00:01:20.598  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 
00:01:20.598  ==> default:     -> value=-device, 
00:01:20.598  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:20.598  ==> default: Creating shared folders metadata...
00:01:20.598  ==> default: Starting domain.
00:01:22.502  ==> default: Waiting for domain to get an IP address...
00:01:32.544  ==> default: Waiting for SSH to become available...
00:01:34.447  ==> default: Configuring and enabling network interfaces...
00:01:39.712  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:45.042  ==> default: Mounting SSHFS shared folder...
00:01:45.609  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output
00:01:45.609  ==> default: Checking Mount..
00:01:46.542  ==> default: Folder Successfully Mounted!
00:01:46.542  ==> default: Running provisioner: file...
00:01:46.801      default: ~/.gitconfig => .gitconfig
00:01:47.061  
00:01:47.061    SUCCESS!
00:01:47.061  
00:01:47.061    cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use.
00:01:47.061    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:47.061    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm.
00:01:47.061  
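
The '-> value=' pairs under 'Command line args' above are passthrough QEMU arguments from vagrant-libvirt; joined up, they attach the ex4-nvme.img backing file as an emulated NVMe controller with one 4K-sector namespace, roughly:

    # QEMU arguments assembled from the vagrant-libvirt settings above (other args elided)
    qemu-system-x86_64 ... \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
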
00:01:47.070  [Pipeline] }
00:01:47.084  [Pipeline] // stage
00:01:47.093  [Pipeline] dir
00:01:47.093  Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt
00:01:47.095  [Pipeline] {
00:01:47.107  [Pipeline] catchError
00:01:47.108  [Pipeline] {
00:01:47.119  [Pipeline] sh
00:01:47.398  + vagrant ssh-config --host vagrant
00:01:47.398  + sed -ne /^Host/,$p
00:01:47.398  + tee ssh_conf
00:01:50.722  Host vagrant
00:01:50.722    HostName 192.168.121.69
00:01:50.722    User vagrant
00:01:50.722    Port 22
00:01:50.722    UserKnownHostsFile /dev/null
00:01:50.722    StrictHostKeyChecking no
00:01:50.722    PasswordAuthentication no
00:01:50.722    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404
00:01:50.722    IdentitiesOnly yes
00:01:50.722    LogLevel FATAL
00:01:50.722    ForwardAgent yes
00:01:50.722    ForwardX11 yes
00:01:50.722  
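
The three-way pipe above saves a reusable SSH config: `vagrant ssh-config` emits the connection details, `sed -ne '/^Host/,$p'` keeps everything from the first `Host` line onward (dropping any banner noise), and `tee ssh_conf` writes it out. Every later remote step in this log just points ssh/scp at that file, e.g.:

    # run a command on the VM / copy a file to it using the saved config
    ssh -t -F ssh_conf vagrant@vagrant uname -r
    scp -F ssh_conf autorun-spdk.conf vagrant@vagrant:spdk_repo
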
00:01:50.735  [Pipeline] withEnv
00:01:50.737  [Pipeline] {
00:01:50.751  [Pipeline] sh
00:01:51.030  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:51.030  		source /etc/os-release
00:01:51.030  		[[ -e /image.version ]] && img=$(< /image.version)
00:01:51.030  		# Minimal, systemd-like check.
00:01:51.030  		if [[ -e /.dockerenv ]]; then
00:01:51.030  			# Clear garbage from the node's name:
00:01:51.030  			#  agt-er_autotest_547-896 -> autotest_547-896
00:01:51.030  			#  $HOSTNAME is the actual container id
00:01:51.030  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:51.030  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:51.030  				# We can assume this is a mount from a host where container is running,
00:01:51.030  				# so fetch its hostname to easily identify the target swarm worker.
00:01:51.030  				container="$(< /etc/hostname) ($agent)"
00:01:51.030  			else
00:01:51.030  				# Fallback
00:01:51.030  				container=$agent
00:01:51.030  			fi
00:01:51.030  		fi
00:01:51.030  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:51.030  
00:01:51.298  [Pipeline] }
00:01:51.313  [Pipeline] // withEnv
00:01:51.321  [Pipeline] setCustomBuildProperty
00:01:51.335  [Pipeline] stage
00:01:51.337  [Pipeline] { (Tests)
00:01:51.353  [Pipeline] sh
00:01:51.631  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:51.901  [Pipeline] sh
00:01:52.181  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:52.454  [Pipeline] timeout
00:01:52.455  Timeout set to expire in 1 hr 30 min
00:01:52.456  [Pipeline] {
00:01:52.470  [Pipeline] sh
00:01:52.817  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:53.385  HEAD is now at 3aefe4228 mk/spdk.common.mk Use pattern substitution instead of prefix removal
00:01:53.398  [Pipeline] sh
00:01:53.678  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:53.951  [Pipeline] sh
00:01:54.232  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:54.504  [Pipeline] sh
00:01:54.782  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo
00:01:55.041  ++ readlink -f spdk_repo
00:01:55.041  + DIR_ROOT=/home/vagrant/spdk_repo
00:01:55.041  + [[ -n /home/vagrant/spdk_repo ]]
00:01:55.041  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:55.041  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:55.041  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:55.041  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:55.041  + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:55.041  + [[ ubuntu24-vg-autotest == pkgdep-* ]]
00:01:55.041  + cd /home/vagrant/spdk_repo
00:01:55.041  + source /etc/os-release
00:01:55.041  ++ PRETTY_NAME='Ubuntu 24.04 LTS'
00:01:55.041  ++ NAME=Ubuntu
00:01:55.041  ++ VERSION_ID=24.04
00:01:55.041  ++ VERSION='24.04 LTS (Noble Numbat)'
00:01:55.041  ++ VERSION_CODENAME=noble
00:01:55.041  ++ ID=ubuntu
00:01:55.041  ++ ID_LIKE=debian
00:01:55.041  ++ HOME_URL=https://www.ubuntu.com/
00:01:55.041  ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:55.041  ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:55.041  ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:55.041  ++ UBUNTU_CODENAME=noble
00:01:55.041  ++ LOGO=ubuntu-logo
00:01:55.041  + uname -a
00:01:55.041  Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:55.041  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:55.299  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:01:55.299  Hugepages
00:01:55.299  node     hugesize     free /  total
00:01:55.299  node0   1048576kB        0 /      0
00:01:55.299  node0      2048kB        0 /      0
00:01:55.299  
00:01:55.299  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:55.299  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:55.559  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:55.559  + rm -f /tmp/spdk-ld-path
00:01:55.559  + source autorun-spdk.conf
00:01:55.559  ++ SPDK_TEST_UNITTEST=1
00:01:55.559  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.559  ++ SPDK_TEST_NVME=1
00:01:55.559  ++ SPDK_TEST_BLOCKDEV=1
00:01:55.559  ++ SPDK_RUN_ASAN=1
00:01:55.559  ++ SPDK_RUN_UBSAN=1
00:01:55.559  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.559  ++ RUN_NIGHTLY=0
00:01:55.559  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:55.559  + [[ -n '' ]]
00:01:55.559  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:55.559  + for M in /var/spdk/build-*-manifest.txt
00:01:55.559  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.559  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.559  + for M in /var/spdk/build-*-manifest.txt
00:01:55.559  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.559  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.559  ++ uname
00:01:55.559  + [[ Linux == \L\i\n\u\x ]]
00:01:55.559  + sudo dmesg -T
00:01:55.559  + sudo dmesg --clear
00:01:55.559  + dmesg_pid=2382
00:01:55.559  + [[ Ubuntu == FreeBSD ]]
00:01:55.559  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.559  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.559  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.559  + sudo dmesg -Tw
00:01:55.559  + [[ -x /usr/src/fio-static/fio ]]
00:01:55.559  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.559  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.559  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.559  + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:55.559  + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:55.559  + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:55.559  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.559  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.559    13:38:37  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:55.559   13:38:37  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_TEST_UNITTEST=1
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME=1
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_BLOCKDEV=1
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.559    13:38:37  -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=0
00:01:55.559   13:38:37  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:55.559   13:38:37  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.818     13:38:38  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:55.818    13:38:38  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:55.818     13:38:38  -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:55.818     13:38:38  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:55.818     13:38:38  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:55.818     13:38:38  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:55.818      13:38:38  -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:55.818      13:38:38  -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:55.818      13:38:38  -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:55.818      13:38:38  -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:55.818      13:38:38  -- paths/export.sh@6 -- $ export PATH
00:01:55.818      13:38:38  -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:55.818    13:38:38  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:55.818      13:38:38  -- common/autobuild_common.sh@493 -- $ date +%s
00:01:55.818     13:38:38  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733924318.XXXXXX
00:01:55.818    13:38:38  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733924318.YvTOav
00:01:55.818    13:38:38  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:55.818    13:38:38  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:55.818    13:38:38  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:55.818    13:38:38  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:55.818    13:38:38  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:55.818     13:38:38  -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:55.818     13:38:38  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:55.818     13:38:38  -- common/autotest_common.sh@10 -- $ set +x
00:01:55.818    13:38:38  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:55.818    13:38:38  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:55.818    13:38:38  -- pm/common@17 -- $ local monitor
00:01:55.818    13:38:38  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.818    13:38:38  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.818    13:38:38  -- pm/common@25 -- $ sleep 1
00:01:55.818     13:38:38  -- pm/common@21 -- $ date +%s
00:01:55.818     13:38:38  -- pm/common@21 -- $ date +%s
00:01:55.818    13:38:38  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733924318
00:01:55.818    13:38:38  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733924318
00:01:55.818  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733924318_collect-cpu-load.pm.log
00:01:55.818  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733924318_collect-vmstat.pm.log
00:01:56.755    13:38:39  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:56.755   13:38:39  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:56.755   13:38:39  -- spdk/autobuild.sh@12 -- $ umask 022
00:01:56.755   13:38:39  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:56.755   13:38:39  -- spdk/autobuild.sh@16 -- $ date -u
00:01:56.755  Wed Dec 11 13:38:39 UTC 2024
00:01:56.755   13:38:39  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:56.755  v25.01-rc1-1-g3aefe4228
00:01:56.755   13:38:39  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:56.755   13:38:39  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:56.755   13:38:39  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.755   13:38:39  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.755   13:38:39  -- common/autotest_common.sh@10 -- $ set +x
00:01:56.755  ************************************
00:01:56.755  START TEST asan
00:01:56.755  ************************************
00:01:56.755  using asan
00:01:56.755   13:38:39 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:56.755  
00:01:56.755  real	0m0.000s
00:01:56.755  user	0m0.000s
00:01:56.755  sys	0m0.000s
00:01:56.755   13:38:39 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:56.755  ************************************
00:01:56.755   13:38:39 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.755  END TEST asan
00:01:56.755  ************************************
00:01:56.755   13:38:39  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:56.755   13:38:39  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:56.755   13:38:39  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.755   13:38:39  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.755   13:38:39  -- common/autotest_common.sh@10 -- $ set +x
00:01:56.755  ************************************
00:01:56.755  START TEST ubsan
00:01:56.755  ************************************
00:01:56.755  using ubsan
00:01:56.755   13:38:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:56.755  
00:01:56.755  real	0m0.000s
00:01:56.755  user	0m0.000s
00:01:56.755  sys	0m0.000s
00:01:56.755   13:38:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:56.755   13:38:39 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.755  ************************************
00:01:56.755  END TEST ubsan
00:01:56.755  ************************************
00:01:56.755   13:38:39  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:56.755   13:38:39  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:56.755   13:38:39  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:56.755   13:38:39  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:56.755   13:38:39  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:56.755   13:38:39  -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:56.755   13:38:39  -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:56.755   13:38:39  -- common/autobuild_common.sh@433 -- $ run_test unittest_build _unittest_build
00:01:56.755   13:38:39  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:01:56.755   13:38:39  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.755   13:38:39  -- common/autotest_common.sh@10 -- $ set +x
00:01:57.015  ************************************
00:01:57.015  START TEST unittest_build
00:01:57.015  ************************************
00:01:57.015   13:38:39 unittest_build -- common/autotest_common.sh@1129 -- $ _unittest_build
00:01:57.015   13:38:39 unittest_build -- common/autobuild_common.sh@424 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --without-shared
00:01:57.015  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:57.015  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:57.582  Using 'verbs' RDMA provider
00:02:14.010  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:28.894  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:28.894  Creating mk/config.mk...done.
00:02:28.894  Creating mk/cc.flags.mk...done.
00:02:28.894  Type 'make' to build.
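
For reference, the whole build phase is the configure line from unittest_build plus the make -j10 that follows; condensed, with the flags copied verbatim from the log:

    # reproduce the autobuild configure + build steps
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan \
        --enable-asan --enable-coverage --with-ublk --without-shared
    make -j10
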
00:02:28.894   13:39:09 unittest_build -- common/autobuild_common.sh@425 -- $ make -j10
00:02:43.760  The Meson build system
00:02:43.760  Version: 1.4.1
00:02:43.760  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:43.760  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:43.760  Build type: native build
00:02:43.760  Program cat found: YES (/usr/bin/cat)
00:02:43.760  Project name: DPDK
00:02:43.760  Project version: 24.03.0
00:02:43.760  C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0")
00:02:43.760  C linker for the host machine: cc ld.bfd 2.42
00:02:43.760  Host machine cpu family: x86_64
00:02:43.760  Host machine cpu: x86_64
00:02:43.760  Message: ## Building in Developer Mode ##
00:02:43.760  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:43.760  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:43.760  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:43.760  Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3)
00:02:43.760  Program cat found: YES (/usr/bin/cat)
00:02:43.760  Compiler for C supports arguments -march=native: YES 
00:02:43.760  Checking for size of "void *" : 8 
00:02:43.760  Checking for size of "void *" : 8 (cached)
00:02:43.760  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:02:43.760  Library m found: YES
00:02:43.760  Library numa found: YES
00:02:43.760  Has header "numaif.h" : YES 
00:02:43.760  Library fdt found: NO
00:02:43.760  Library execinfo found: NO
00:02:43.760  Has header "execinfo.h" : YES 
00:02:43.760  Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1
00:02:43.760  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:43.760  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:43.760  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:43.760  Run-time dependency openssl found: YES 3.0.13
00:02:43.760  Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:43.760  Library pcap found: NO
00:02:43.760  Compiler for C supports arguments -Wcast-qual: YES 
00:02:43.760  Compiler for C supports arguments -Wdeprecated: YES 
00:02:43.760  Compiler for C supports arguments -Wformat: YES 
00:02:43.760  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:02:43.760  Compiler for C supports arguments -Wformat-security: YES 
00:02:43.760  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:43.760  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:43.760  Compiler for C supports arguments -Wnested-externs: YES 
00:02:43.760  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:43.760  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:43.760  Compiler for C supports arguments -Wsign-compare: YES 
00:02:43.760  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:43.760  Compiler for C supports arguments -Wundef: YES 
00:02:43.760  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:43.761  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:43.761  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:43.761  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:43.761  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:43.761  Program objdump found: YES (/usr/bin/objdump)
00:02:43.761  Compiler for C supports arguments -mavx512f: YES 
00:02:43.761  Checking if "AVX512 checking" compiles: YES 
00:02:43.761  Fetching value of define "__SSE4_2__" : 1 
00:02:43.761  Fetching value of define "__AES__" : 1 
00:02:43.761  Fetching value of define "__AVX__" : 1 
00:02:43.761  Fetching value of define "__AVX2__" : 1 
00:02:43.761  Fetching value of define "__AVX512BW__" : 1 
00:02:43.761  Fetching value of define "__AVX512CD__" : 1 
00:02:43.761  Fetching value of define "__AVX512DQ__" : 1 
00:02:43.761  Fetching value of define "__AVX512F__" : 1 
00:02:43.761  Fetching value of define "__AVX512VL__" : 1 
00:02:43.761  Fetching value of define "__PCLMUL__" : 1 
00:02:43.761  Fetching value of define "__RDRND__" : 1 
00:02:43.761  Fetching value of define "__RDSEED__" : 1 
00:02:43.761  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:02:43.761  Fetching value of define "__znver1__" : (undefined) 
00:02:43.761  Fetching value of define "__znver2__" : (undefined) 
00:02:43.761  Fetching value of define "__znver3__" : (undefined) 
00:02:43.761  Fetching value of define "__znver4__" : (undefined) 
00:02:43.761  Library asan found: YES
00:02:43.761  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:43.761  Message: lib/log: Defining dependency "log"
00:02:43.761  Message: lib/kvargs: Defining dependency "kvargs"
00:02:43.761  Message: lib/telemetry: Defining dependency "telemetry"
00:02:43.761  Library rt found: YES
00:02:43.761  Checking for function "getentropy" : NO 
00:02:43.761  Message: lib/eal: Defining dependency "eal"
00:02:43.761  Message: lib/ring: Defining dependency "ring"
00:02:43.761  Message: lib/rcu: Defining dependency "rcu"
00:02:43.761  Message: lib/mempool: Defining dependency "mempool"
00:02:43.761  Message: lib/mbuf: Defining dependency "mbuf"
00:02:43.761  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:43.761  Fetching value of define "__AVX512F__" : 1 (cached)
00:02:43.761  Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:43.761  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:43.761  Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:43.761  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:43.761  Compiler for C supports arguments -mpclmul: YES 
00:02:43.761  Compiler for C supports arguments -maes: YES 
00:02:43.761  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:43.761  Compiler for C supports arguments -mavx512bw: YES 
00:02:43.761  Compiler for C supports arguments -mavx512dq: YES 
00:02:43.761  Compiler for C supports arguments -mavx512vl: YES 
00:02:43.761  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:43.761  Compiler for C supports arguments -mavx2: YES 
00:02:43.761  Compiler for C supports arguments -mavx: YES 
00:02:43.761  Message: lib/net: Defining dependency "net"
00:02:43.761  Message: lib/meter: Defining dependency "meter"
00:02:43.761  Message: lib/ethdev: Defining dependency "ethdev"
00:02:43.761  Message: lib/pci: Defining dependency "pci"
00:02:43.761  Message: lib/cmdline: Defining dependency "cmdline"
00:02:43.761  Message: lib/hash: Defining dependency "hash"
00:02:43.761  Message: lib/timer: Defining dependency "timer"
00:02:43.761  Message: lib/compressdev: Defining dependency "compressdev"
00:02:43.761  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:43.761  Message: lib/dmadev: Defining dependency "dmadev"
00:02:43.761  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:43.761  Message: lib/power: Defining dependency "power"
00:02:43.761  Message: lib/reorder: Defining dependency "reorder"
00:02:43.761  Message: lib/security: Defining dependency "security"
00:02:43.761  Has header "linux/userfaultfd.h" : YES 
00:02:43.761  Has header "linux/vduse.h" : YES 
00:02:43.761  Message: lib/vhost: Defining dependency "vhost"
00:02:43.761  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:43.761  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:43.761  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:43.761  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:43.761  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:43.761  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:43.761  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:43.761  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:43.761  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:43.761  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:43.761  Program doxygen found: YES (/usr/bin/doxygen)
00:02:43.761  Configuring doxy-api-html.conf using configuration
00:02:43.761  Configuring doxy-api-man.conf using configuration
00:02:43.761  Program mandb found: YES (/usr/bin/mandb)
00:02:43.761  Program sphinx-build found: NO
00:02:43.761  Configuring rte_build_config.h using configuration
00:02:43.761  Message: 
00:02:43.761  =================
00:02:43.761  Applications Enabled
00:02:43.761  =================
00:02:43.761  
00:02:43.761  apps:
00:02:43.761  	
00:02:43.761  
00:02:43.761  Message: 
00:02:43.761  =================
00:02:43.761  Libraries Enabled
00:02:43.761  =================
00:02:43.761  
00:02:43.761  libs:
00:02:43.761  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:43.761  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:43.761  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:43.761  
00:02:43.761  Message: 
00:02:43.761  ===============
00:02:43.761  Drivers Enabled
00:02:43.761  ===============
00:02:43.761  
00:02:43.761  common:
00:02:43.761  	
00:02:43.761  bus:
00:02:43.761  	pci, vdev, 
00:02:43.761  mempool:
00:02:43.761  	ring, 
00:02:43.761  dma:
00:02:43.761  	
00:02:43.761  net:
00:02:43.761  	
00:02:43.761  crypto:
00:02:43.761  	
00:02:43.761  compress:
00:02:43.761  	
00:02:43.761  vdpa:
00:02:43.761  	
00:02:43.761  
00:02:43.761  Message: 
00:02:43.761  =================
00:02:43.761  Content Skipped
00:02:43.761  =================
00:02:43.761  
00:02:43.761  apps:
00:02:43.761  	dumpcap:	explicitly disabled via build config
00:02:43.761  	graph:	explicitly disabled via build config
00:02:43.761  	pdump:	explicitly disabled via build config
00:02:43.761  	proc-info:	explicitly disabled via build config
00:02:43.761  	test-acl:	explicitly disabled via build config
00:02:43.761  	test-bbdev:	explicitly disabled via build config
00:02:43.761  	test-cmdline:	explicitly disabled via build config
00:02:43.761  	test-compress-perf:	explicitly disabled via build config
00:02:43.761  	test-crypto-perf:	explicitly disabled via build config
00:02:43.761  	test-dma-perf:	explicitly disabled via build config
00:02:43.761  	test-eventdev:	explicitly disabled via build config
00:02:43.761  	test-fib:	explicitly disabled via build config
00:02:43.761  	test-flow-perf:	explicitly disabled via build config
00:02:43.761  	test-gpudev:	explicitly disabled via build config
00:02:43.761  	test-mldev:	explicitly disabled via build config
00:02:43.761  	test-pipeline:	explicitly disabled via build config
00:02:43.761  	test-pmd:	explicitly disabled via build config
00:02:43.761  	test-regex:	explicitly disabled via build config
00:02:43.761  	test-sad:	explicitly disabled via build config
00:02:43.761  	test-security-perf:	explicitly disabled via build config
00:02:43.761  	
00:02:43.761  libs:
00:02:43.761  	argparse:	explicitly disabled via build config
00:02:43.761  	metrics:	explicitly disabled via build config
00:02:43.761  	acl:	explicitly disabled via build config
00:02:43.761  	bbdev:	explicitly disabled via build config
00:02:43.761  	bitratestats:	explicitly disabled via build config
00:02:43.761  	bpf:	explicitly disabled via build config
00:02:43.761  	cfgfile:	explicitly disabled via build config
00:02:43.761  	distributor:	explicitly disabled via build config
00:02:43.761  	efd:	explicitly disabled via build config
00:02:43.761  	eventdev:	explicitly disabled via build config
00:02:43.761  	dispatcher:	explicitly disabled via build config
00:02:43.761  	gpudev:	explicitly disabled via build config
00:02:43.761  	gro:	explicitly disabled via build config
00:02:43.761  	gso:	explicitly disabled via build config
00:02:43.761  	ip_frag:	explicitly disabled via build config
00:02:43.761  	jobstats:	explicitly disabled via build config
00:02:43.761  	latencystats:	explicitly disabled via build config
00:02:43.761  	lpm:	explicitly disabled via build config
00:02:43.761  	member:	explicitly disabled via build config
00:02:43.761  	pcapng:	explicitly disabled via build config
00:02:43.761  	rawdev:	explicitly disabled via build config
00:02:43.761  	regexdev:	explicitly disabled via build config
00:02:43.761  	mldev:	explicitly disabled via build config
00:02:43.761  	rib:	explicitly disabled via build config
00:02:43.761  	sched:	explicitly disabled via build config
00:02:43.761  	stack:	explicitly disabled via build config
00:02:43.761  	ipsec:	explicitly disabled via build config
00:02:43.761  	pdcp:	explicitly disabled via build config
00:02:43.761  	fib:	explicitly disabled via build config
00:02:43.761  	port:	explicitly disabled via build config
00:02:43.761  	pdump:	explicitly disabled via build config
00:02:43.761  	table:	explicitly disabled via build config
00:02:43.761  	pipeline:	explicitly disabled via build config
00:02:43.761  	graph:	explicitly disabled via build config
00:02:43.761  	node:	explicitly disabled via build config
00:02:43.761  	
00:02:43.761  drivers:
00:02:43.761  	common/cpt:	not in enabled drivers build config
00:02:43.761  	common/dpaax:	not in enabled drivers build config
00:02:43.761  	common/iavf:	not in enabled drivers build config
00:02:43.761  	common/idpf:	not in enabled drivers build config
00:02:43.761  	common/ionic:	not in enabled drivers build config
00:02:43.761  	common/mvep:	not in enabled drivers build config
00:02:43.761  	common/octeontx:	not in enabled drivers build config
00:02:43.761  	bus/auxiliary:	not in enabled drivers build config
00:02:43.761  	bus/cdx:	not in enabled drivers build config
00:02:43.761  	bus/dpaa:	not in enabled drivers build config
00:02:43.761  	bus/fslmc:	not in enabled drivers build config
00:02:43.761  	bus/ifpga:	not in enabled drivers build config
00:02:43.761  	bus/platform:	not in enabled drivers build config
00:02:43.761  	bus/uacce:	not in enabled drivers build config
00:02:43.761  	bus/vmbus:	not in enabled drivers build config
00:02:43.761  	common/cnxk:	not in enabled drivers build config
00:02:43.762  	common/mlx5:	not in enabled drivers build config
00:02:43.762  	common/nfp:	not in enabled drivers build config
00:02:43.762  	common/nitrox:	not in enabled drivers build config
00:02:43.762  	common/qat:	not in enabled drivers build config
00:02:43.762  	common/sfc_efx:	not in enabled drivers build config
00:02:43.762  	mempool/bucket:	not in enabled drivers build config
00:02:43.762  	mempool/cnxk:	not in enabled drivers build config
00:02:43.762  	mempool/dpaa:	not in enabled drivers build config
00:02:43.762  	mempool/dpaa2:	not in enabled drivers build config
00:02:43.762  	mempool/octeontx:	not in enabled drivers build config
00:02:43.762  	mempool/stack:	not in enabled drivers build config
00:02:43.762  	dma/cnxk:	not in enabled drivers build config
00:02:43.762  	dma/dpaa:	not in enabled drivers build config
00:02:43.762  	dma/dpaa2:	not in enabled drivers build config
00:02:43.762  	dma/hisilicon:	not in enabled drivers build config
00:02:43.762  	dma/idxd:	not in enabled drivers build config
00:02:43.762  	dma/ioat:	not in enabled drivers build config
00:02:43.762  	dma/skeleton:	not in enabled drivers build config
00:02:43.762  	net/af_packet:	not in enabled drivers build config
00:02:43.762  	net/af_xdp:	not in enabled drivers build config
00:02:43.762  	net/ark:	not in enabled drivers build config
00:02:43.762  	net/atlantic:	not in enabled drivers build config
00:02:43.762  	net/avp:	not in enabled drivers build config
00:02:43.762  	net/axgbe:	not in enabled drivers build config
00:02:43.762  	net/bnx2x:	not in enabled drivers build config
00:02:43.762  	net/bnxt:	not in enabled drivers build config
00:02:43.762  	net/bonding:	not in enabled drivers build config
00:02:43.762  	net/cnxk:	not in enabled drivers build config
00:02:43.762  	net/cpfl:	not in enabled drivers build config
00:02:43.762  	net/cxgbe:	not in enabled drivers build config
00:02:43.762  	net/dpaa:	not in enabled drivers build config
00:02:43.762  	net/dpaa2:	not in enabled drivers build config
00:02:43.762  	net/e1000:	not in enabled drivers build config
00:02:43.762  	net/ena:	not in enabled drivers build config
00:02:43.762  	net/enetc:	not in enabled drivers build config
00:02:43.762  	net/enetfec:	not in enabled drivers build config
00:02:43.762  	net/enic:	not in enabled drivers build config
00:02:43.762  	net/failsafe:	not in enabled drivers build config
00:02:43.762  	net/fm10k:	not in enabled drivers build config
00:02:43.762  	net/gve:	not in enabled drivers build config
00:02:43.762  	net/hinic:	not in enabled drivers build config
00:02:43.762  	net/hns3:	not in enabled drivers build config
00:02:43.762  	net/i40e:	not in enabled drivers build config
00:02:43.762  	net/iavf:	not in enabled drivers build config
00:02:43.762  	net/ice:	not in enabled drivers build config
00:02:43.762  	net/idpf:	not in enabled drivers build config
00:02:43.762  	net/igc:	not in enabled drivers build config
00:02:43.762  	net/ionic:	not in enabled drivers build config
00:02:43.762  	net/ipn3ke:	not in enabled drivers build config
00:02:43.762  	net/ixgbe:	not in enabled drivers build config
00:02:43.762  	net/mana:	not in enabled drivers build config
00:02:43.762  	net/memif:	not in enabled drivers build config
00:02:43.762  	net/mlx4:	not in enabled drivers build config
00:02:43.762  	net/mlx5:	not in enabled drivers build config
00:02:43.762  	net/mvneta:	not in enabled drivers build config
00:02:43.762  	net/mvpp2:	not in enabled drivers build config
00:02:43.762  	net/netvsc:	not in enabled drivers build config
00:02:43.762  	net/nfb:	not in enabled drivers build config
00:02:43.762  	net/nfp:	not in enabled drivers build config
00:02:43.762  	net/ngbe:	not in enabled drivers build config
00:02:43.762  	net/null:	not in enabled drivers build config
00:02:43.762  	net/octeontx:	not in enabled drivers build config
00:02:43.762  	net/octeon_ep:	not in enabled drivers build config
00:02:43.762  	net/pcap:	not in enabled drivers build config
00:02:43.762  	net/pfe:	not in enabled drivers build config
00:02:43.762  	net/qede:	not in enabled drivers build config
00:02:43.762  	net/ring:	not in enabled drivers build config
00:02:43.762  	net/sfc:	not in enabled drivers build config
00:02:43.762  	net/softnic:	not in enabled drivers build config
00:02:43.762  	net/tap:	not in enabled drivers build config
00:02:43.762  	net/thunderx:	not in enabled drivers build config
00:02:43.762  	net/txgbe:	not in enabled drivers build config
00:02:43.762  	net/vdev_netvsc:	not in enabled drivers build config
00:02:43.762  	net/vhost:	not in enabled drivers build config
00:02:43.762  	net/virtio:	not in enabled drivers build config
00:02:43.762  	net/vmxnet3:	not in enabled drivers build config
00:02:43.762  	raw/*:	missing internal dependency, "rawdev"
00:02:43.762  	crypto/armv8:	not in enabled drivers build config
00:02:43.762  	crypto/bcmfs:	not in enabled drivers build config
00:02:43.762  	crypto/caam_jr:	not in enabled drivers build config
00:02:43.762  	crypto/ccp:	not in enabled drivers build config
00:02:43.762  	crypto/cnxk:	not in enabled drivers build config
00:02:43.762  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:43.762  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:43.762  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:43.762  	crypto/mlx5:	not in enabled drivers build config
00:02:43.762  	crypto/mvsam:	not in enabled drivers build config
00:02:43.762  	crypto/nitrox:	not in enabled drivers build config
00:02:43.762  	crypto/null:	not in enabled drivers build config
00:02:43.762  	crypto/octeontx:	not in enabled drivers build config
00:02:43.762  	crypto/openssl:	not in enabled drivers build config
00:02:43.762  	crypto/scheduler:	not in enabled drivers build config
00:02:43.762  	crypto/uadk:	not in enabled drivers build config
00:02:43.762  	crypto/virtio:	not in enabled drivers build config
00:02:43.762  	compress/isal:	not in enabled drivers build config
00:02:43.762  	compress/mlx5:	not in enabled drivers build config
00:02:43.762  	compress/nitrox:	not in enabled drivers build config
00:02:43.762  	compress/octeontx:	not in enabled drivers build config
00:02:43.762  	compress/zlib:	not in enabled drivers build config
00:02:43.762  	regex/*:	missing internal dependency, "regexdev"
00:02:43.762  	ml/*:	missing internal dependency, "mldev"
00:02:43.762  	vdpa/ifc:	not in enabled drivers build config
00:02:43.762  	vdpa/mlx5:	not in enabled drivers build config
00:02:43.762  	vdpa/nfp:	not in enabled drivers build config
00:02:43.762  	vdpa/sfc:	not in enabled drivers build config
00:02:43.762  	event/*:	missing internal dependency, "eventdev"
00:02:43.762  	baseband/*:	missing internal dependency, "bbdev"
00:02:43.762  	gpu/*:	missing internal dependency, "gpudev"
00:02:43.762  	
00:02:43.762  
00:02:43.762  Build targets in project: 85
00:02:43.762  
00:02:43.762  DPDK 24.03.0
00:02:43.762  
00:02:43.762    User defined options
00:02:43.762      buildtype          : debug
00:02:43.762      default_library    : static
00:02:43.762      libdir             : lib
00:02:43.762      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:43.762      b_sanitize         : address
00:02:43.762      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:02:43.762      c_link_args        : 
00:02:43.762      cpu_instruction_set: native
00:02:43.762      disable_apps       : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump
00:02:43.762      disable_libs       : mldev,jobstats,bpf,argparse,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec
00:02:43.762      enable_docs        : false
00:02:43.762      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:43.762      enable_kmods       : false
00:02:43.762      max_lcores         : 128
00:02:43.762      tests              : false
00:02:43.762  
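
The 'User defined options' block maps onto a meson setup invocation along these lines; an approximate sketch (SPDK's configure generates the real call, and the long disable/enable lists are shortened here to the values printed above):

    # rough equivalent of the DPDK sub-build configuration shown above
    meson setup build-tmp \
        --buildtype=debug --default-library=static --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address -Dcpu_instruction_set=native \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false \
        -Ddisable_apps=... -Ddisable_libs=... -Denable_drivers=...   # comma lists exactly as printed above
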
00:02:43.762  Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja
00:02:44.328  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:44.328  [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:44.328  [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:44.328  [3/268] Linking static target lib/librte_kvargs.a
00:02:44.328  [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:44.328  [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:44.328  [6/268] Linking static target lib/librte_log.a
00:02:44.892  [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:44.892  [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.892  [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:44.892  [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:44.892  [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:44.892  [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:44.892  [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:45.150  [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:45.150  [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:45.150  [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:45.150  [17/268] Linking static target lib/librte_telemetry.a
00:02:45.150  [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:45.407  [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:45.407  [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:45.665  [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:45.665  [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.665  [23/268] Linking target lib/librte_log.so.24.1
00:02:45.923  [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:45.923  [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:45.923  [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:46.180  [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:46.180  [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:46.180  [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:46.180  [30/268] Linking target lib/librte_kvargs.so.24.1
00:02:46.180  [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:46.180  [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.180  [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:46.180  [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:46.475  [35/268] Linking target lib/librte_telemetry.so.24.1
00:02:46.475  [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:46.475  [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:46.732  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:46.732  [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:46.732  [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:46.732  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:46.732  [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:46.732  [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:46.990  [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:46.990  [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:46.990  [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:46.990  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:46.990  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:47.248  [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:47.506  [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:47.507  [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:47.507  [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:47.507  [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:47.507  [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:47.507  [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:47.764  [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:47.764  [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:47.764  [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:47.764  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:48.022  [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:48.022  [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:48.280  [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:48.280  [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:48.280  [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:48.280  [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:48.280  [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:48.280  [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:48.537  [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:48.537  [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:48.537  [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:48.794  [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:48.794  [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:48.794  [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:48.794  [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:49.052  [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:49.052  [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:49.309  [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:49.309  [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:49.567  [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:49.567  [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:49.567  [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:49.567  [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:49.567  [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:49.824  [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:49.824  [85/268] Linking static target lib/librte_ring.a
00:02:50.082  [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:50.082  [87/268] Linking static target lib/librte_eal.a
00:02:50.082  [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:50.082  [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:50.340  [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:50.340  [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:50.340  [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:50.340  [93/268] Linking static target lib/librte_rcu.a
00:02:50.340  [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:50.340  [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.598  [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:50.598  [97/268] Linking static target lib/librte_mempool.a
00:02:50.598  [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:50.856  [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.856  [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:51.113  [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:51.113  [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:51.113  [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:51.113  [104/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:51.113  [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:51.113  [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:51.113  [107/268] Linking static target lib/librte_net.a
00:02:51.113  [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:51.113  [109/268] Linking static target lib/librte_mbuf.a
00:02:51.371  [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:51.371  [111/268] Linking static target lib/librte_meter.a
00:02:51.629  [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:51.629  [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.629  [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:51.886  [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:51.886  [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.886  [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.886  [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:52.143  [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:52.401  [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:52.401  [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.658  [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:52.658  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:52.915  [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:52.915  [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:52.915  [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:52.915  [127/268] Linking static target lib/librte_pci.a
00:02:53.172  [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:53.172  [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:53.172  [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:53.172  [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:53.444  [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:53.444  [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:53.444  [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.444  [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:53.444  [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:53.444  [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:53.444  [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:53.703  [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:53.703  [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:53.703  [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:53.703  [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:53.703  [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:53.703  [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:53.703  [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:53.961  [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:53.961  [147/268] Linking static target lib/librte_cmdline.a
00:02:54.219  [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:54.219  [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:54.477  [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:54.735  [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:54.735  [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:54.735  [153/268] Linking static target lib/librte_timer.a
00:02:54.735  [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:54.735  [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:54.993  [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:54.993  [157/268] Linking static target lib/librte_ethdev.a
00:02:54.993  [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:54.993  [159/268] Linking static target lib/librte_compressdev.a
00:02:55.251  [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:55.251  [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:55.251  [162/268] Linking static target lib/librte_hash.a
00:02:55.508  [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:55.508  [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.508  [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:55.508  [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:55.508  [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:55.508  [168/268] Linking static target lib/librte_dmadev.a
00:02:55.766  [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.766  [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:55.766  [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:56.024  [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:56.283  [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.283  [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:56.283  [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:56.283  [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:56.541  [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.541  [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.798  [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:56.798  [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:56.798  [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:56.798  [182/268] Linking static target lib/librte_cryptodev.a
00:02:56.798  [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:56.798  [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:57.056  [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:57.056  [186/268] Linking static target lib/librte_power.a
00:02:57.313  [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:57.313  [188/268] Linking static target lib/librte_reorder.a
00:02:57.313  [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:57.570  [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:57.570  [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:57.570  [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:57.570  [193/268] Linking static target lib/librte_security.a
00:02:57.829  [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.087  [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:58.346  [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.346  [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.346  [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:58.605  [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:58.863  [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:58.863  [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:58.863  [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:58.863  [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:59.121  [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:59.121  [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:59.379  [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:59.379  [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:59.379  [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:59.637  [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:59.637  [210/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:59.637  [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.637  [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:59.637  [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:59.637  [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:59.637  [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:59.637  [216/268] Linking static target drivers/librte_bus_vdev.a
00:02:59.895  [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:59.895  [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:59.895  [219/268] Linking static target drivers/librte_bus_pci.a
00:02:59.895  [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:59.895  [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:59.895  [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:59.895  [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.153  [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.153  [225/268] Linking static target drivers/librte_mempool_ring.a
00:03:00.153  [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.411  [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.375  [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:02.749  [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.749  [230/268] Linking target lib/librte_eal.so.24.1
00:03:03.007  [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:03.007  [232/268] Linking target lib/librte_timer.so.24.1
00:03:03.007  [233/268] Linking target lib/librte_meter.so.24.1
00:03:03.007  [234/268] Linking target lib/librte_dmadev.so.24.1
00:03:03.007  [235/268] Linking target drivers/librte_bus_vdev.so.24.1
00:03:03.007  [236/268] Linking target lib/librte_pci.so.24.1
00:03:03.007  [237/268] Linking target lib/librte_ring.so.24.1
00:03:03.265  [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:03.265  [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:03.265  [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:03.265  [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:03.265  [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:03.265  [243/268] Linking target drivers/librte_bus_pci.so.24.1
00:03:03.265  [244/268] Linking target lib/librte_rcu.so.24.1
00:03:03.523  [245/268] Linking target lib/librte_mempool.so.24.1
00:03:03.523  [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:03.523  [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:03.523  [248/268] Linking target drivers/librte_mempool_ring.so.24.1
00:03:03.523  [249/268] Linking target lib/librte_mbuf.so.24.1
00:03:03.781  [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.781  [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:03.781  [252/268] Linking target lib/librte_cryptodev.so.24.1
00:03:03.781  [253/268] Linking target lib/librte_compressdev.so.24.1
00:03:04.039  [254/268] Linking target lib/librte_net.so.24.1
00:03:04.039  [255/268] Linking target lib/librte_reorder.so.24.1
00:03:04.039  [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:04.039  [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:04.297  [258/268] Linking target lib/librte_hash.so.24.1
00:03:04.297  [259/268] Linking target lib/librte_cmdline.so.24.1
00:03:04.297  [260/268] Linking target lib/librte_security.so.24.1
00:03:04.297  [261/268] Linking target lib/librte_ethdev.so.24.1
00:03:04.297  [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:04.555  [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:04.555  [264/268] Linking target lib/librte_power.so.24.1
00:03:06.455  [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:06.455  [266/268] Linking static target lib/librte_vhost.a
00:03:07.829  [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.087  [268/268] Linking target lib/librte_vhost.so.24.1
00:03:08.087  INFO: autodetecting backend as ninja
00:03:08.087  INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:03:09.461    CC lib/ut_mock/mock.o
00:03:09.461    CC lib/ut/ut.o
00:03:09.461    CC lib/log/log.o
00:03:09.461    CC lib/log/log_deprecated.o
00:03:09.461    CC lib/log/log_flags.o
00:03:09.461    LIB libspdk_ut_mock.a
00:03:09.461    LIB libspdk_ut.a
00:03:09.461    LIB libspdk_log.a
00:03:09.718    CC lib/ioat/ioat.o
00:03:09.718    CXX lib/trace_parser/trace.o
00:03:09.718    CC lib/dma/dma.o
00:03:09.718    CC lib/util/cpuset.o
00:03:09.718    CC lib/util/bit_array.o
00:03:09.718    CC lib/util/base64.o
00:03:09.718    CC lib/util/crc16.o
00:03:09.718    CC lib/util/crc32c.o
00:03:09.718    CC lib/util/crc32.o
00:03:09.976    CC lib/vfio_user/host/vfio_user_pci.o
00:03:09.976    CC lib/util/crc32_ieee.o
00:03:09.976    CC lib/util/crc64.o
00:03:09.976    CC lib/vfio_user/host/vfio_user.o
00:03:10.260    CC lib/util/dif.o
00:03:10.260    CC lib/util/fd.o
00:03:10.260    LIB libspdk_ioat.a
00:03:10.260    CC lib/util/fd_group.o
00:03:10.260    CC lib/util/file.o
00:03:10.260    CC lib/util/hexlify.o
00:03:10.260    LIB libspdk_dma.a
00:03:10.260    CC lib/util/iov.o
00:03:10.260    CC lib/util/math.o
00:03:10.260    CC lib/util/net.o
00:03:10.260    CC lib/util/pipe.o
00:03:10.519    CC lib/util/strerror_tls.o
00:03:10.519    CC lib/util/string.o
00:03:10.519    LIB libspdk_vfio_user.a
00:03:10.519    CC lib/util/uuid.o
00:03:10.519    CC lib/util/xor.o
00:03:10.519    CC lib/util/zipf.o
00:03:10.519    CC lib/util/md5.o
00:03:11.085    LIB libspdk_util.a
00:03:11.343    CC lib/json/json_util.o
00:03:11.343    CC lib/json/json_parse.o
00:03:11.343    CC lib/vmd/vmd.o
00:03:11.343    CC lib/json/json_write.o
00:03:11.343    CC lib/idxd/idxd_user.o
00:03:11.343    CC lib/idxd/idxd.o
00:03:11.343    CC lib/conf/conf.o
00:03:11.343    CC lib/env_dpdk/env.o
00:03:11.343    CC lib/rdma_utils/rdma_utils.o
00:03:11.600    LIB libspdk_trace_parser.a
00:03:11.600    CC lib/env_dpdk/memory.o
00:03:11.859    CC lib/idxd/idxd_kernel.o
00:03:11.859    CC lib/env_dpdk/pci.o
00:03:11.859    LIB libspdk_conf.a
00:03:11.859    CC lib/vmd/led.o
00:03:11.859    CC lib/env_dpdk/init.o
00:03:11.859    LIB libspdk_json.a
00:03:12.117    CC lib/env_dpdk/threads.o
00:03:12.117    LIB libspdk_rdma_utils.a
00:03:12.117    CC lib/env_dpdk/pci_ioat.o
00:03:12.117    CC lib/env_dpdk/pci_virtio.o
00:03:12.374    CC lib/env_dpdk/pci_vmd.o
00:03:12.374    CC lib/env_dpdk/pci_idxd.o
00:03:12.374    CC lib/jsonrpc/jsonrpc_server.o
00:03:12.374    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:12.374    CC lib/jsonrpc/jsonrpc_client.o
00:03:12.633    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:12.633    CC lib/env_dpdk/pci_event.o
00:03:12.633    CC lib/env_dpdk/sigbus_handler.o
00:03:12.633    LIB libspdk_idxd.a
00:03:12.633    LIB libspdk_vmd.a
00:03:12.633    CC lib/rdma_provider/common.o
00:03:12.633    CC lib/rdma_provider/rdma_provider_verbs.o
00:03:12.633    CC lib/env_dpdk/pci_dpdk.o
00:03:12.633    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:12.633    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:12.891    LIB libspdk_jsonrpc.a
00:03:12.891    LIB libspdk_rdma_provider.a
00:03:13.457    CC lib/rpc/rpc.o
00:03:13.457    LIB libspdk_rpc.a
00:03:14.024    CC lib/trace/trace_rpc.o
00:03:14.024    CC lib/trace/trace_flags.o
00:03:14.024    CC lib/trace/trace.o
00:03:14.024    CC lib/notify/notify.o
00:03:14.024    CC lib/notify/notify_rpc.o
00:03:14.024    CC lib/keyring/keyring.o
00:03:14.024    CC lib/keyring/keyring_rpc.o
00:03:14.282    LIB libspdk_notify.a
00:03:14.282    LIB libspdk_keyring.a
00:03:14.282    LIB libspdk_trace.a
00:03:14.282    LIB libspdk_env_dpdk.a
00:03:14.540    CC lib/thread/thread.o
00:03:14.540    CC lib/thread/iobuf.o
00:03:14.540    CC lib/sock/sock.o
00:03:14.540    CC lib/sock/sock_rpc.o
00:03:15.474    LIB libspdk_sock.a
00:03:15.474    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:15.474    CC lib/nvme/nvme_ctrlr.o
00:03:15.732    CC lib/nvme/nvme_fabric.o
00:03:15.732    CC lib/nvme/nvme_ns_cmd.o
00:03:15.732    CC lib/nvme/nvme_pcie.o
00:03:15.732    CC lib/nvme/nvme_ns.o
00:03:15.732    CC lib/nvme/nvme_qpair.o
00:03:15.732    CC lib/nvme/nvme_pcie_common.o
00:03:15.732    CC lib/nvme/nvme.o
00:03:16.296    CC lib/nvme/nvme_quirks.o
00:03:16.554    CC lib/nvme/nvme_transport.o
00:03:16.554    CC lib/nvme/nvme_discovery.o
00:03:16.554    LIB libspdk_thread.a
00:03:16.554    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:16.554    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:16.554    CC lib/nvme/nvme_tcp.o
00:03:16.812    CC lib/nvme/nvme_opal.o
00:03:16.812    CC lib/nvme/nvme_io_msg.o
00:03:17.070    CC lib/nvme/nvme_poll_group.o
00:03:17.070    CC lib/nvme/nvme_zns.o
00:03:17.327    CC lib/nvme/nvme_stubs.o
00:03:17.327    CC lib/nvme/nvme_auth.o
00:03:17.327    CC lib/nvme/nvme_cuse.o
00:03:17.327    CC lib/nvme/nvme_rdma.o
00:03:17.893    CC lib/accel/accel.o
00:03:17.893    CC lib/blob/blobstore.o
00:03:17.893    CC lib/blob/request.o
00:03:17.893    CC lib/init/json_config.o
00:03:18.151    CC lib/virtio/virtio.o
00:03:18.151    CC lib/init/subsystem.o
00:03:18.409    CC lib/init/subsystem_rpc.o
00:03:18.409    CC lib/init/rpc.o
00:03:18.666    CC lib/virtio/virtio_vhost_user.o
00:03:18.666    CC lib/virtio/virtio_vfio_user.o
00:03:18.666    CC lib/fsdev/fsdev.o
00:03:18.666    LIB libspdk_init.a
00:03:18.666    CC lib/virtio/virtio_pci.o
00:03:18.666    CC lib/accel/accel_rpc.o
00:03:18.923    CC lib/accel/accel_sw.o
00:03:18.923    CC lib/blob/zeroes.o
00:03:18.923    CC lib/fsdev/fsdev_io.o
00:03:19.180    CC lib/event/app.o
00:03:19.180    CC lib/event/reactor.o
00:03:19.180    LIB libspdk_virtio.a
00:03:19.180    CC lib/event/log_rpc.o
00:03:19.437    CC lib/event/app_rpc.o
00:03:19.437    LIB libspdk_accel.a
00:03:19.437    CC lib/event/scheduler_static.o
00:03:19.437    CC lib/fsdev/fsdev_rpc.o
00:03:19.437    CC lib/blob/blob_bs_dev.o
00:03:19.695    CC lib/bdev/bdev.o
00:03:19.695    CC lib/bdev/bdev_rpc.o
00:03:19.695    CC lib/bdev/bdev_zone.o
00:03:19.695    CC lib/bdev/scsi_nvme.o
00:03:19.695    CC lib/bdev/part.o
00:03:19.695    LIB libspdk_event.a
00:03:19.953    LIB libspdk_nvme.a
00:03:19.953    LIB libspdk_fsdev.a
00:03:20.519    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:21.454    LIB libspdk_fuse_dispatcher.a
00:03:22.827    LIB libspdk_blob.a
00:03:23.084    CC lib/lvol/lvol.o
00:03:23.084    CC lib/blobfs/blobfs.o
00:03:23.084    CC lib/blobfs/tree.o
00:03:24.018    LIB libspdk_bdev.a
00:03:24.019    CC lib/ftl/ftl_core.o
00:03:24.019    CC lib/ftl/ftl_init.o
00:03:24.019    CC lib/ftl/ftl_layout.o
00:03:24.019    CC lib/ftl/ftl_debug.o
00:03:24.276    CC lib/ublk/ublk.o
00:03:24.276    CC lib/nvmf/ctrlr.o
00:03:24.276    CC lib/nbd/nbd.o
00:03:24.276    CC lib/scsi/dev.o
00:03:24.276    LIB libspdk_blobfs.a
00:03:24.276    CC lib/scsi/lun.o
00:03:24.611    CC lib/scsi/port.o
00:03:24.611    LIB libspdk_lvol.a
00:03:24.611    CC lib/scsi/scsi.o
00:03:24.611    CC lib/nbd/nbd_rpc.o
00:03:24.611    CC lib/ftl/ftl_io.o
00:03:24.611    CC lib/scsi/scsi_bdev.o
00:03:24.611    CC lib/nvmf/ctrlr_discovery.o
00:03:24.611    CC lib/nvmf/ctrlr_bdev.o
00:03:24.894    CC lib/ftl/ftl_sb.o
00:03:24.894    CC lib/ftl/ftl_l2p.o
00:03:24.894    LIB libspdk_nbd.a
00:03:24.894    CC lib/ftl/ftl_l2p_flat.o
00:03:24.894    CC lib/ftl/ftl_nv_cache.o
00:03:24.894    CC lib/ublk/ublk_rpc.o
00:03:25.153    CC lib/ftl/ftl_band.o
00:03:25.153    CC lib/ftl/ftl_band_ops.o
00:03:25.153    CC lib/ftl/ftl_writer.o
00:03:25.153    CC lib/ftl/ftl_rq.o
00:03:25.411    CC lib/scsi/scsi_pr.o
00:03:25.411    CC lib/ftl/ftl_reloc.o
00:03:25.670    CC lib/ftl/ftl_l2p_cache.o
00:03:25.670    CC lib/ftl/ftl_p2l.o
00:03:25.670    CC lib/nvmf/subsystem.o
00:03:25.670    LIB libspdk_ublk.a
00:03:25.670    CC lib/scsi/scsi_rpc.o
00:03:25.670    CC lib/ftl/ftl_p2l_log.o
00:03:25.670    CC lib/ftl/mngt/ftl_mngt.o
00:03:25.929    CC lib/scsi/task.o
00:03:25.929    CC lib/nvmf/nvmf.o
00:03:26.187    CC lib/nvmf/nvmf_rpc.o
00:03:26.187    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:26.187    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:26.187    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:26.448    LIB libspdk_scsi.a
00:03:26.448    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:26.448    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:26.448    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:26.448    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:26.708    CC lib/iscsi/conn.o
00:03:26.708    CC lib/vhost/vhost.o
00:03:26.708    CC lib/vhost/vhost_rpc.o
00:03:26.708    CC lib/vhost/vhost_scsi.o
00:03:26.965    CC lib/vhost/vhost_blk.o
00:03:26.965    CC lib/vhost/rte_vhost_user.o
00:03:26.965    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:27.531    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:27.531    CC lib/iscsi/init_grp.o
00:03:27.531    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:27.531    CC lib/nvmf/transport.o
00:03:27.531    CC lib/nvmf/tcp.o
00:03:27.789    CC lib/iscsi/iscsi.o
00:03:27.789    CC lib/iscsi/param.o
00:03:27.789    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:27.789    CC lib/iscsi/portal_grp.o
00:03:27.789    CC lib/iscsi/tgt_node.o
00:03:28.046    CC lib/iscsi/iscsi_subsystem.o
00:03:28.046    CC lib/iscsi/iscsi_rpc.o
00:03:28.046    CC lib/iscsi/task.o
00:03:28.304    CC lib/nvmf/stubs.o
00:03:28.304    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:28.304    LIB libspdk_vhost.a
00:03:28.304    CC lib/ftl/utils/ftl_conf.o
00:03:28.304    CC lib/nvmf/mdns_server.o
00:03:28.562    CC lib/nvmf/rdma.o
00:03:28.562    CC lib/nvmf/auth.o
00:03:28.562    CC lib/ftl/utils/ftl_md.o
00:03:28.562    CC lib/ftl/utils/ftl_mempool.o
00:03:28.562    CC lib/ftl/utils/ftl_bitmap.o
00:03:28.562    CC lib/ftl/utils/ftl_property.o
00:03:28.819    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:28.819    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:28.819    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:29.077    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:29.077    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:29.077    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:29.077    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:29.077    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:29.335    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:29.335    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:29.335    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:29.335    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:29.335    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:29.335    CC lib/ftl/base/ftl_base_dev.o
00:03:29.335    CC lib/ftl/base/ftl_base_bdev.o
00:03:29.594    CC lib/ftl/ftl_trace.o
00:03:29.594    LIB libspdk_iscsi.a
00:03:29.852    LIB libspdk_ftl.a
00:03:31.775    LIB libspdk_nvmf.a
00:03:32.055    CC module/env_dpdk/env_dpdk_rpc.o
00:03:32.055    CC module/fsdev/aio/fsdev_aio.o
00:03:32.055    CC module/accel/iaa/accel_iaa.o
00:03:32.055    CC module/blob/bdev/blob_bdev.o
00:03:32.055    CC module/accel/ioat/accel_ioat.o
00:03:32.055    CC module/accel/dsa/accel_dsa.o
00:03:32.055    CC module/keyring/file/keyring.o
00:03:32.055    CC module/accel/error/accel_error.o
00:03:32.055    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:32.055    CC module/sock/posix/posix.o
00:03:32.313    LIB libspdk_env_dpdk_rpc.a
00:03:32.313    CC module/keyring/file/keyring_rpc.o
00:03:32.313    CC module/accel/ioat/accel_ioat_rpc.o
00:03:32.313    CC module/accel/iaa/accel_iaa_rpc.o
00:03:32.313    LIB libspdk_scheduler_dynamic.a
00:03:32.569    CC module/accel/dsa/accel_dsa_rpc.o
00:03:32.569    LIB libspdk_keyring_file.a
00:03:32.569    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:32.569    LIB libspdk_blob_bdev.a
00:03:32.569    LIB libspdk_accel_ioat.a
00:03:32.569    CC module/accel/error/accel_error_rpc.o
00:03:32.569    LIB libspdk_accel_iaa.a
00:03:32.569    CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:32.569    CC module/scheduler/gscheduler/gscheduler.o
00:03:32.569    LIB libspdk_accel_dsa.a
00:03:32.827    LIB libspdk_scheduler_dpdk_governor.a
00:03:32.827    CC module/keyring/linux/keyring.o
00:03:32.827    LIB libspdk_accel_error.a
00:03:32.827    CC module/keyring/linux/keyring_rpc.o
00:03:32.827    CC module/fsdev/aio/linux_aio_mgr.o
00:03:32.827    LIB libspdk_scheduler_gscheduler.a
00:03:32.827    CC module/bdev/delay/vbdev_delay.o
00:03:32.827    CC module/blobfs/bdev/blobfs_bdev.o
00:03:32.827    CC module/bdev/error/vbdev_error.o
00:03:32.827    CC module/bdev/error/vbdev_error_rpc.o
00:03:32.827    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:33.084    LIB libspdk_keyring_linux.a
00:03:33.084    CC module/bdev/gpt/gpt.o
00:03:33.084    CC module/bdev/gpt/vbdev_gpt.o
00:03:33.084    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:33.084    LIB libspdk_fsdev_aio.a
00:03:33.084    LIB libspdk_sock_posix.a
00:03:33.084    LIB libspdk_blobfs_bdev.a
00:03:33.341    LIB libspdk_bdev_error.a
00:03:33.341    CC module/bdev/lvol/vbdev_lvol.o
00:03:33.341    CC module/bdev/null/bdev_null.o
00:03:33.341    CC module/bdev/malloc/bdev_malloc.o
00:03:33.341    LIB libspdk_bdev_delay.a
00:03:33.341    CC module/bdev/nvme/bdev_nvme.o
00:03:33.341    LIB libspdk_bdev_gpt.a
00:03:33.341    CC module/bdev/passthru/vbdev_passthru.o
00:03:33.341    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:33.600    CC module/bdev/raid/bdev_raid.o
00:03:33.600    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:33.600    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:33.600    CC module/bdev/split/vbdev_split.o
00:03:33.600    CC module/bdev/null/bdev_null_rpc.o
00:03:33.858    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:33.858    CC module/bdev/split/vbdev_split_rpc.o
00:03:33.858    LIB libspdk_bdev_passthru.a
00:03:33.858    LIB libspdk_bdev_null.a
00:03:33.858    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:33.858    LIB libspdk_bdev_malloc.a
00:03:33.858    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:34.116    LIB libspdk_bdev_split.a
00:03:34.116    CC module/bdev/aio/bdev_aio.o
00:03:34.116    CC module/bdev/ftl/bdev_ftl.o
00:03:34.116    CC module/bdev/nvme/nvme_rpc.o
00:03:34.116    CC module/bdev/iscsi/bdev_iscsi.o
00:03:34.116    LIB libspdk_bdev_zone_block.a
00:03:34.116    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:34.116    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:34.373    CC module/bdev/raid/bdev_raid_rpc.o
00:03:34.373    CC module/bdev/raid/bdev_raid_sb.o
00:03:34.373    CC module/bdev/raid/raid0.o
00:03:34.373    LIB libspdk_bdev_lvol.a
00:03:34.373    LIB libspdk_bdev_ftl.a
00:03:34.373    CC module/bdev/aio/bdev_aio_rpc.o
00:03:34.631    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:34.631    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:34.631    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:34.631    CC module/bdev/raid/raid1.o
00:03:34.631    LIB libspdk_bdev_aio.a
00:03:34.631    LIB libspdk_bdev_iscsi.a
00:03:34.631    CC module/bdev/raid/concat.o
00:03:34.631    CC module/bdev/nvme/bdev_mdns_client.o
00:03:34.631    CC module/bdev/nvme/vbdev_opal.o
00:03:34.890    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:34.890    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:34.890    LIB libspdk_bdev_virtio.a
00:03:35.148    LIB libspdk_bdev_raid.a
00:03:37.050    LIB libspdk_bdev_nvme.a
00:03:37.616    CC module/event/subsystems/iobuf/iobuf.o
00:03:37.616    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:37.616    CC module/event/subsystems/keyring/keyring.o
00:03:37.616    CC module/event/subsystems/sock/sock.o
00:03:37.616    CC module/event/subsystems/fsdev/fsdev.o
00:03:37.616    CC module/event/subsystems/scheduler/scheduler.o
00:03:37.616    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:37.616    CC module/event/subsystems/vmd/vmd.o
00:03:37.616    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:37.875    LIB libspdk_event_fsdev.a
00:03:37.875    LIB libspdk_event_sock.a
00:03:37.875    LIB libspdk_event_keyring.a
00:03:37.875    LIB libspdk_event_scheduler.a
00:03:37.875    LIB libspdk_event_vmd.a
00:03:37.875    LIB libspdk_event_iobuf.a
00:03:37.875    LIB libspdk_event_vhost_blk.a
00:03:38.133    CC module/event/subsystems/accel/accel.o
00:03:38.391    LIB libspdk_event_accel.a
00:03:38.959    CC module/event/subsystems/bdev/bdev.o
00:03:38.959    LIB libspdk_event_bdev.a
00:03:39.217    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:39.217    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:39.217    CC module/event/subsystems/scsi/scsi.o
00:03:39.217    CC module/event/subsystems/ublk/ublk.o
00:03:39.217    CC module/event/subsystems/nbd/nbd.o
00:03:39.475    LIB libspdk_event_ublk.a
00:03:39.475    LIB libspdk_event_nbd.a
00:03:39.475    LIB libspdk_event_scsi.a
00:03:39.734    LIB libspdk_event_nvmf.a
00:03:39.734    CC module/event/subsystems/iscsi/iscsi.o
00:03:39.734    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:39.992    LIB libspdk_event_iscsi.a
00:03:39.992    LIB libspdk_event_vhost_scsi.a
00:03:40.251    CC app/spdk_lspci/spdk_lspci.o
00:03:40.510    CC app/trace_record/trace_record.o
00:03:40.510    CXX app/trace/trace.o
00:03:40.510    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:40.510    CC app/nvmf_tgt/nvmf_main.o
00:03:40.510    CC app/spdk_tgt/spdk_tgt.o
00:03:40.510    CC app/iscsi_tgt/iscsi_tgt.o
00:03:40.510    CC test/thread/poller_perf/poller_perf.o
00:03:40.510    CC examples/ioat/perf/perf.o
00:03:40.510    CC examples/util/zipf/zipf.o
00:03:40.510    LINK spdk_lspci
00:03:40.510    LINK interrupt_tgt
00:03:40.769    LINK poller_perf
00:03:40.769    LINK nvmf_tgt
00:03:40.769    LINK spdk_trace_record
00:03:40.769    LINK spdk_tgt
00:03:40.769    LINK iscsi_tgt
00:03:40.769    LINK zipf
00:03:40.769    LINK ioat_perf
00:03:41.028    LINK spdk_trace
00:03:41.286    CC test/thread/lock/spdk_lock.o
00:03:41.286    CC examples/ioat/verify/verify.o
00:03:41.545    CC app/spdk_nvme_perf/perf.o
00:03:41.545    CC test/dma/test_dma/test_dma.o
00:03:41.545    CC test/app/bdev_svc/bdev_svc.o
00:03:41.545    LINK verify
00:03:41.809    LINK bdev_svc
00:03:41.809    CC app/spdk_nvme_identify/identify.o
00:03:42.067    LINK test_dma
00:03:42.067    CC app/spdk_nvme_discover/discovery_aer.o
00:03:42.325    LINK spdk_nvme_discover
00:03:42.325    CC examples/thread/thread/thread_ex.o
00:03:42.583    LINK spdk_nvme_perf
00:03:42.583    LINK thread
00:03:42.841    LINK spdk_nvme_identify
00:03:43.099    CC examples/sock/hello_world/hello_sock.o
00:03:43.099    CC examples/vmd/lsvmd/lsvmd.o
00:03:43.358    CC examples/idxd/perf/perf.o
00:03:43.358    LINK lsvmd
00:03:43.358    LINK spdk_lock
00:03:43.358    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:43.358    LINK hello_sock
00:03:43.358    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:43.616    LINK idxd_perf
00:03:43.875    CC app/spdk_top/spdk_top.o
00:03:43.875    LINK nvme_fuzz
00:03:44.133    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:44.133    CC test/app/histogram_perf/histogram_perf.o
00:03:44.133    CC test/app/jsoncat/jsoncat.o
00:03:44.133    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:44.133    LINK histogram_perf
00:03:44.133    LINK jsoncat
00:03:44.133    CC test/app/stub/stub.o
00:03:44.133    CC examples/vmd/led/led.o
00:03:44.393    CC examples/nvme/hello_world/hello_world.o
00:03:44.393    LINK led
00:03:44.393    LINK stub
00:03:44.651    CC examples/nvme/reconnect/reconnect.o
00:03:44.651    LINK hello_world
00:03:44.651    LINK vhost_fuzz
00:03:44.910    CC app/vhost/vhost.o
00:03:44.910    LINK spdk_top
00:03:44.910    CC app/spdk_dd/spdk_dd.o
00:03:44.910    TEST_HEADER include/spdk/accel.h
00:03:44.910    TEST_HEADER include/spdk/accel_module.h
00:03:44.910    TEST_HEADER include/spdk/assert.h
00:03:44.910    TEST_HEADER include/spdk/barrier.h
00:03:44.910    TEST_HEADER include/spdk/base64.h
00:03:44.910    TEST_HEADER include/spdk/bdev.h
00:03:44.910    TEST_HEADER include/spdk/bdev_module.h
00:03:44.910    TEST_HEADER include/spdk/bdev_zone.h
00:03:44.910    TEST_HEADER include/spdk/bit_array.h
00:03:44.910    TEST_HEADER include/spdk/bit_pool.h
00:03:44.910    TEST_HEADER include/spdk/blob.h
00:03:44.910    TEST_HEADER include/spdk/blob_bdev.h
00:03:44.910    TEST_HEADER include/spdk/blobfs.h
00:03:44.910    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:44.910    TEST_HEADER include/spdk/conf.h
00:03:44.910    TEST_HEADER include/spdk/config.h
00:03:44.910    TEST_HEADER include/spdk/cpuset.h
00:03:44.910    TEST_HEADER include/spdk/crc16.h
00:03:44.910    TEST_HEADER include/spdk/crc32.h
00:03:44.910    TEST_HEADER include/spdk/crc64.h
00:03:44.910    TEST_HEADER include/spdk/dif.h
00:03:44.910    TEST_HEADER include/spdk/dma.h
00:03:44.910    TEST_HEADER include/spdk/endian.h
00:03:44.910    TEST_HEADER include/spdk/env.h
00:03:44.910    TEST_HEADER include/spdk/env_dpdk.h
00:03:44.910    TEST_HEADER include/spdk/event.h
00:03:44.910    TEST_HEADER include/spdk/fd.h
00:03:44.910    TEST_HEADER include/spdk/fd_group.h
00:03:44.910    TEST_HEADER include/spdk/file.h
00:03:44.910    TEST_HEADER include/spdk/fsdev.h
00:03:44.910    TEST_HEADER include/spdk/fsdev_module.h
00:03:44.910    LINK reconnect
00:03:44.910    TEST_HEADER include/spdk/ftl.h
00:03:44.910    TEST_HEADER include/spdk/gpt_spec.h
00:03:44.910    TEST_HEADER include/spdk/hexlify.h
00:03:44.910    TEST_HEADER include/spdk/histogram_data.h
00:03:44.910    TEST_HEADER include/spdk/idxd.h
00:03:44.910    TEST_HEADER include/spdk/idxd_spec.h
00:03:44.910    TEST_HEADER include/spdk/init.h
00:03:44.910    TEST_HEADER include/spdk/ioat.h
00:03:44.910    TEST_HEADER include/spdk/ioat_spec.h
00:03:44.910    TEST_HEADER include/spdk/iscsi_spec.h
00:03:44.910    TEST_HEADER include/spdk/json.h
00:03:44.910    TEST_HEADER include/spdk/jsonrpc.h
00:03:44.910    TEST_HEADER include/spdk/keyring.h
00:03:44.910    TEST_HEADER include/spdk/keyring_module.h
00:03:44.910    TEST_HEADER include/spdk/likely.h
00:03:44.910    TEST_HEADER include/spdk/log.h
00:03:44.910    TEST_HEADER include/spdk/lvol.h
00:03:44.910    TEST_HEADER include/spdk/md5.h
00:03:44.910    TEST_HEADER include/spdk/memory.h
00:03:44.910    TEST_HEADER include/spdk/mmio.h
00:03:44.910    TEST_HEADER include/spdk/nbd.h
00:03:44.910    TEST_HEADER include/spdk/net.h
00:03:44.910    TEST_HEADER include/spdk/notify.h
00:03:44.910    TEST_HEADER include/spdk/nvme.h
00:03:44.910    TEST_HEADER include/spdk/nvme_intel.h
00:03:44.910    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:44.910    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:44.910    TEST_HEADER include/spdk/nvme_spec.h
00:03:44.910    TEST_HEADER include/spdk/nvme_zns.h
00:03:44.910    TEST_HEADER include/spdk/nvmf.h
00:03:44.910    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:44.910    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:44.910    TEST_HEADER include/spdk/nvmf_spec.h
00:03:44.910    TEST_HEADER include/spdk/nvmf_transport.h
00:03:44.910    TEST_HEADER include/spdk/opal.h
00:03:44.910    TEST_HEADER include/spdk/opal_spec.h
00:03:44.910    TEST_HEADER include/spdk/pci_ids.h
00:03:44.910    TEST_HEADER include/spdk/pipe.h
00:03:44.910    TEST_HEADER include/spdk/queue.h
00:03:44.910    LINK vhost
00:03:44.910    TEST_HEADER include/spdk/reduce.h
00:03:44.910    TEST_HEADER include/spdk/rpc.h
00:03:44.910    TEST_HEADER include/spdk/scheduler.h
00:03:44.910    TEST_HEADER include/spdk/scsi.h
00:03:45.169    TEST_HEADER include/spdk/scsi_spec.h
00:03:45.169    TEST_HEADER include/spdk/sock.h
00:03:45.169    TEST_HEADER include/spdk/stdinc.h
00:03:45.169    TEST_HEADER include/spdk/string.h
00:03:45.169    TEST_HEADER include/spdk/thread.h
00:03:45.169    TEST_HEADER include/spdk/trace.h
00:03:45.169    TEST_HEADER include/spdk/trace_parser.h
00:03:45.169    TEST_HEADER include/spdk/tree.h
00:03:45.169    TEST_HEADER include/spdk/ublk.h
00:03:45.169    TEST_HEADER include/spdk/util.h
00:03:45.169    TEST_HEADER include/spdk/uuid.h
00:03:45.169    TEST_HEADER include/spdk/version.h
00:03:45.169    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:45.169    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:45.169    TEST_HEADER include/spdk/vhost.h
00:03:45.169    TEST_HEADER include/spdk/vmd.h
00:03:45.169    TEST_HEADER include/spdk/xor.h
00:03:45.169    TEST_HEADER include/spdk/zipf.h
00:03:45.169    CXX test/cpp_headers/accel.o
00:03:45.427    CXX test/cpp_headers/accel_module.o
00:03:45.427    LINK spdk_dd
00:03:45.427    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:45.427    CC examples/nvme/arbitration/arbitration.o
00:03:45.427    CC test/event/event_perf/event_perf.o
00:03:45.427    CC test/env/mem_callbacks/mem_callbacks.o
00:03:45.685    LINK iscsi_fuzz
00:03:45.685    CXX test/cpp_headers/assert.o
00:03:45.685    CC app/fio/nvme/fio_plugin.o
00:03:45.685    LINK event_perf
00:03:45.943    CXX test/cpp_headers/barrier.o
00:03:45.943    LINK arbitration
00:03:45.943    CC test/event/reactor/reactor.o
00:03:45.943    CXX test/cpp_headers/base64.o
00:03:45.943    LINK reactor
00:03:46.201    LINK mem_callbacks
00:03:46.201    LINK nvme_manage
00:03:46.201    CXX test/cpp_headers/bdev.o
00:03:46.201    CXX test/cpp_headers/bdev_module.o
00:03:46.460    LINK spdk_nvme
00:03:46.460    CXX test/cpp_headers/bdev_zone.o
00:03:46.460    CC test/env/vtophys/vtophys.o
00:03:46.717    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:46.717    CC test/nvme/aer/aer.o
00:03:46.717    LINK vtophys
00:03:46.717    CXX test/cpp_headers/bit_array.o
00:03:46.717    LINK env_dpdk_post_init
00:03:46.717    CC test/event/reactor_perf/reactor_perf.o
00:03:46.977    CC test/event/app_repeat/app_repeat.o
00:03:46.977    CXX test/cpp_headers/bit_pool.o
00:03:46.977    LINK reactor_perf
00:03:46.977    LINK app_repeat
00:03:46.977    LINK aer
00:03:47.236    CC examples/nvme/hotplug/hotplug.o
00:03:47.236    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:47.236    CXX test/cpp_headers/blob.o
00:03:47.236    CC app/fio/bdev/fio_plugin.o
00:03:47.493    CXX test/cpp_headers/blob_bdev.o
00:03:47.493    LINK hotplug
00:03:47.493    LINK cmb_copy
00:03:47.493    CC examples/nvme/abort/abort.o
00:03:47.493    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:47.493    CXX test/cpp_headers/blobfs.o
00:03:47.752    LINK pmr_persistence
00:03:47.752    CC test/env/memory/memory_ut.o
00:03:47.752    CC test/env/pci/pci_ut.o
00:03:48.010    CC test/nvme/reset/reset.o
00:03:48.010    CC test/event/scheduler/scheduler.o
00:03:48.010    CXX test/cpp_headers/blobfs_bdev.o
00:03:48.010    LINK abort
00:03:48.268    LINK spdk_bdev
00:03:48.268    CC test/rpc_client/rpc_client_test.o
00:03:48.268    CXX test/cpp_headers/conf.o
00:03:48.268    LINK pci_ut
00:03:48.268    LINK scheduler
00:03:48.268    LINK reset
00:03:48.527    CXX test/cpp_headers/config.o
00:03:48.527    CXX test/cpp_headers/cpuset.o
00:03:48.527    LINK rpc_client_test
00:03:48.527    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:03:48.527    CC test/nvme/sgl/sgl.o
00:03:48.785    CXX test/cpp_headers/crc16.o
00:03:48.785    CXX test/cpp_headers/crc32.o
00:03:48.785    LINK histogram_ut
00:03:49.044    CXX test/cpp_headers/crc64.o
00:03:49.044    LINK sgl
00:03:49.044    CXX test/cpp_headers/dif.o
00:03:49.044    CXX test/cpp_headers/dma.o
00:03:49.044    CC examples/accel/perf/accel_perf.o
00:03:49.044    LINK memory_ut
00:03:49.302    CC test/unit/lib/log/log.c/log_ut.o
00:03:49.302    CXX test/cpp_headers/endian.o
00:03:49.302    CC test/accel/dif/dif.o
00:03:49.302    CC test/blobfs/mkfs/mkfs.o
00:03:49.302    CXX test/cpp_headers/env.o
00:03:49.560    CC examples/blob/hello_world/hello_blob.o
00:03:49.560    CC examples/fsdev/hello_world/hello_fsdev.o
00:03:49.560    LINK log_ut
00:03:49.560    LINK mkfs
00:03:49.560    CXX test/cpp_headers/env_dpdk.o
00:03:49.560    LINK accel_perf
00:03:49.819    CXX test/cpp_headers/event.o
00:03:49.819    LINK hello_blob
00:03:49.819    LINK hello_fsdev
00:03:49.819    CC test/nvme/e2edp/nvme_dp.o
00:03:50.077    CC test/unit/lib/rdma/common.c/common_ut.o
00:03:50.077    CXX test/cpp_headers/fd.o
00:03:50.077    CXX test/cpp_headers/fd_group.o
00:03:50.077    CXX test/cpp_headers/file.o
00:03:50.335    LINK dif
00:03:50.335    LINK nvme_dp
00:03:50.335    CC examples/blob/cli/blobcli.o
00:03:50.335    CXX test/cpp_headers/fsdev.o
00:03:50.335    CXX test/cpp_headers/fsdev_module.o
00:03:50.593    CC test/lvol/esnap/esnap.o
00:03:50.593    CXX test/cpp_headers/ftl.o
00:03:50.852    CC examples/bdev/hello_world/hello_bdev.o
00:03:51.111    LINK common_ut
00:03:51.111    CXX test/cpp_headers/gpt_spec.o
00:03:51.111    LINK hello_bdev
00:03:51.111    LINK blobcli
00:03:51.111    CC test/nvme/overhead/overhead.o
00:03:51.369    CXX test/cpp_headers/hexlify.o
00:03:51.369    CC test/unit/lib/util/base64.c/base64_ut.o
00:03:51.628    CXX test/cpp_headers/histogram_data.o
00:03:51.628    LINK overhead
00:03:51.628    CXX test/cpp_headers/idxd.o
00:03:51.628    LINK base64_ut
00:03:51.887    CXX test/cpp_headers/idxd_spec.o
00:03:51.887    CC test/unit/lib/dma/dma.c/dma_ut.o
00:03:51.887    CXX test/cpp_headers/init.o
00:03:51.887    CXX test/cpp_headers/ioat.o
00:03:51.887    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:03:52.148    CC test/nvme/err_injection/err_injection.o
00:03:52.148    CC test/nvme/startup/startup.o
00:03:52.148    CXX test/cpp_headers/ioat_spec.o
00:03:52.148    LINK err_injection
00:03:52.407    CXX test/cpp_headers/iscsi_spec.o
00:03:52.407    LINK startup
00:03:52.407    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:03:52.407    CXX test/cpp_headers/json.o
00:03:52.664    CC examples/bdev/bdevperf/bdevperf.o
00:03:52.664    CXX test/cpp_headers/jsonrpc.o
00:03:52.664    LINK bit_array_ut
00:03:52.922    LINK dma_ut
00:03:52.922    CXX test/cpp_headers/keyring.o
00:03:53.181    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:03:53.181    LINK ioat_ut
00:03:53.181    CC test/nvme/reserve/reserve.o
00:03:53.181    CC test/nvme/connect_stress/connect_stress.o
00:03:53.181    CC test/nvme/simple_copy/simple_copy.o
00:03:53.181    CC test/nvme/boot_partition/boot_partition.o
00:03:53.439    CC test/nvme/compliance/nvme_compliance.o
00:03:53.439    CXX test/cpp_headers/keyring_module.o
00:03:53.439    CXX test/cpp_headers/likely.o
00:03:53.439    LINK reserve
00:03:53.439    LINK cpuset_ut
00:03:53.439    LINK boot_partition
00:03:53.439    LINK connect_stress
00:03:53.698    CXX test/cpp_headers/log.o
00:03:53.698    LINK simple_copy
00:03:53.698    LINK bdevperf
00:03:53.698    CC test/nvme/fused_ordering/fused_ordering.o
00:03:53.698    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:03:54.265    LINK nvme_compliance
00:03:54.265    CXX test/cpp_headers/lvol.o
00:03:54.265    LINK crc16_ut
00:03:54.265    LINK fused_ordering
00:03:54.265    CXX test/cpp_headers/md5.o
00:03:54.265    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:03:54.523    CXX test/cpp_headers/memory.o
00:03:54.523    CXX test/cpp_headers/mmio.o
00:03:54.523    CXX test/cpp_headers/nbd.o
00:03:54.523    CXX test/cpp_headers/net.o
00:03:54.523    LINK crc32_ieee_ut
00:03:54.523    CXX test/cpp_headers/notify.o
00:03:54.781    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:54.781    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:03:54.781    CC test/nvme/fdp/fdp.o
00:03:54.781    CXX test/cpp_headers/nvme.o
00:03:54.781    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:03:54.781    LINK crc32c_ut
00:03:54.781    CC test/nvme/cuse/cuse.o
00:03:54.781    LINK crc64_ut
00:03:54.781    LINK doorbell_aers
00:03:55.039    CXX test/cpp_headers/nvme_intel.o
00:03:55.039    CXX test/cpp_headers/nvme_ocssd.o
00:03:55.039    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:55.039    LINK fdp
00:03:55.297    CC test/unit/lib/util/dif.c/dif_ut.o
00:03:55.297    CC test/unit/lib/util/file.c/file_ut.o
00:03:55.297    CC test/unit/lib/util/iov.c/iov_ut.o
00:03:55.297    CXX test/cpp_headers/nvme_spec.o
00:03:55.556    CC test/unit/lib/util/math.c/math_ut.o
00:03:55.556    LINK file_ut
00:03:55.556    CXX test/cpp_headers/nvme_zns.o
00:03:55.556    LINK iov_ut
00:03:55.556    LINK math_ut
00:03:55.814    CXX test/cpp_headers/nvmf.o
00:03:55.814    CXX test/cpp_headers/nvmf_cmd.o
00:03:55.814    CC test/unit/lib/util/net.c/net_ut.o
00:03:55.814    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:55.814    CXX test/cpp_headers/nvmf_spec.o
00:03:56.073    LINK net_ut
00:03:56.073    CC examples/nvmf/nvmf/nvmf.o
00:03:56.073    CXX test/cpp_headers/nvmf_transport.o
00:03:56.073    CXX test/cpp_headers/opal.o
00:03:56.073    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:03:56.073    CC test/unit/lib/util/string.c/string_ut.o
00:03:56.073    CXX test/cpp_headers/opal_spec.o
00:03:56.331    CXX test/cpp_headers/pci_ids.o
00:03:56.331    CC test/unit/lib/util/xor.c/xor_ut.o
00:03:56.331    CC test/unit/lib/util/fd_group.c/fd_group_ut.o
00:03:56.331    LINK nvmf
00:03:56.331    LINK string_ut
00:03:56.331    CC test/bdev/bdevio/bdevio.o
00:03:56.331    LINK cuse
00:03:56.331    CXX test/cpp_headers/pipe.o
00:03:56.589    LINK dif_ut
00:03:56.589    CXX test/cpp_headers/queue.o
00:03:56.589    CXX test/cpp_headers/reduce.o
00:03:56.589    CXX test/cpp_headers/rpc.o
00:03:56.589    CXX test/cpp_headers/scheduler.o
00:03:56.847    CXX test/cpp_headers/scsi.o
00:03:56.847    CXX test/cpp_headers/scsi_spec.o
00:03:56.847    LINK fd_group_ut
00:03:56.847    LINK pipe_ut
00:03:56.847    CXX test/cpp_headers/sock.o
00:03:56.847    CXX test/cpp_headers/stdinc.o
00:03:56.847    LINK bdevio
00:03:56.847    LINK xor_ut
00:03:57.106    CXX test/cpp_headers/string.o
00:03:57.106    CXX test/cpp_headers/thread.o
00:03:57.106    CXX test/cpp_headers/trace.o
00:03:57.106    CXX test/cpp_headers/trace_parser.o
00:03:57.106    CXX test/cpp_headers/tree.o
00:03:57.106    CXX test/cpp_headers/ublk.o
00:03:57.106    CXX test/cpp_headers/util.o
00:03:57.106    CXX test/cpp_headers/uuid.o
00:03:57.365    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:03:57.365    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:03:57.365    CXX test/cpp_headers/version.o
00:03:57.365    CXX test/cpp_headers/vfio_user_pci.o
00:03:57.365    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:03:57.365    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:03:57.365    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:03:57.365    LINK esnap
00:03:57.365    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:03:57.623    CXX test/cpp_headers/vfio_user_spec.o
00:03:57.623    CXX test/cpp_headers/vhost.o
00:03:57.881    CXX test/cpp_headers/vmd.o
00:03:57.881    CXX test/cpp_headers/xor.o
00:03:57.881    LINK pci_event_ut
00:03:57.881    LINK json_util_ut
00:03:58.140    CXX test/cpp_headers/zipf.o
00:03:58.140    LINK idxd_user_ut
00:03:58.398    LINK json_write_ut
00:03:58.691    LINK idxd_ut
00:03:59.664    LINK json_parse_ut
00:04:00.231    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:04:00.799    LINK jsonrpc_server_ut
00:04:01.365    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:04:02.743    LINK rpc_ut
00:04:03.002    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:04:03.002    CC test/unit/lib/thread/thread.c/thread_ut.o
00:04:03.002    CC test/unit/lib/sock/sock.c/sock_ut.o
00:04:03.002    CC test/unit/lib/sock/posix.c/posix_ut.o
00:04:03.261    CC test/unit/lib/notify/notify.c/notify_ut.o
00:04:03.261    CC test/unit/lib/keyring/keyring.c/keyring_ut.o
00:04:03.827    LINK keyring_ut
00:04:04.120    LINK notify_ut
00:04:04.397    LINK iobuf_ut
00:04:04.656    LINK posix_ut
00:04:05.224    LINK sock_ut
00:04:05.790    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:04:05.790    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:04:06.049    LINK thread_ut
00:04:06.308    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:04:07.243    LINK nvme_ns_ut
00:04:07.243    LINK nvme_ctrlr_ocssd_cmd_ut
00:04:07.243    LINK nvme_poll_group_ut
00:04:07.501    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:04:07.501    LINK nvme_ctrlr_cmd_ut
00:04:07.501    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:04:07.501    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:04:07.760    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:04:08.022    LINK nvme_ut
00:04:08.022    LINK nvme_ns_ocssd_cmd_ut
00:04:08.022    LINK nvme_qpair_ut
00:04:08.022    LINK nvme_quirks_ut
00:04:08.281    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:04:08.281    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:04:08.281    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:04:08.281    CC test/unit/lib/accel/accel.c/accel_ut.o
00:04:08.539    LINK nvme_ns_cmd_ut
00:04:08.539    LINK nvme_pcie_ut
00:04:08.797    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:04:08.797    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:04:09.364    LINK nvme_transport_ut
00:04:09.364    LINK nvme_io_msg_ut
00:04:09.622    LINK nvme_opal_ut
00:04:09.622    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:04:09.622    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:04:09.622    LINK nvme_fabric_ut
00:04:09.893    CC test/unit/lib/init/rpc.c/rpc_ut.o
00:04:09.893    LINK nvme_ctrlr_ut
00:04:09.893    CC test/unit/lib/blob/blob.c/blob_ut.o
00:04:10.152    CC test/unit/lib/fsdev/fsdev.c/fsdev_ut.o
00:04:10.410    LINK nvme_pcie_common_ut
00:04:10.410    LINK rpc_ut
00:04:10.667    LINK blob_bdev_ut
00:04:10.667    LINK subsystem_ut
00:04:10.924    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:04:10.924    CC test/unit/lib/event/app.c/app_ut.o
00:04:11.182    LINK nvme_tcp_ut
00:04:11.182    LINK nvme_cuse_ut
00:04:11.439    LINK fsdev_ut
00:04:12.005    LINK accel_ut
00:04:12.005    LINK nvme_rdma_ut
00:04:12.005    LINK app_ut
00:04:12.263    LINK reactor_ut
00:04:12.520    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:04:12.520    CC test/unit/lib/bdev/part.c/part_ut.o
00:04:12.521    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:04:12.521    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:04:12.521    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:04:12.521    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:04:12.521    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:04:12.521    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:04:12.779    LINK scsi_nvme_ut
00:04:12.779    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:04:12.779    LINK bdev_zone_ut
00:04:13.037    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:04:13.037    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:04:13.294    LINK gpt_ut
00:04:13.551    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:04:14.118    LINK vbdev_zone_block_ut
00:04:14.118    LINK bdev_raid_sb_ut
00:04:14.376    LINK vbdev_lvol_ut
00:04:14.376    CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o
00:04:14.376    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:04:14.376    LINK concat_ut
00:04:15.311    LINK raid1_ut
00:04:15.311    LINK bdev_raid_ut
00:04:15.311    LINK raid0_ut
00:04:17.215    LINK part_ut
00:04:17.781    LINK bdev_ut
00:04:19.160    LINK blob_ut
00:04:19.418    LINK bdev_ut
00:04:19.985    LINK bdev_nvme_ut
00:04:19.985    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:04:19.985    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:04:19.985    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:04:19.985    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:04:19.985    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:04:20.297    LINK blobfs_bdev_ut
00:04:20.297    LINK tree_ut
00:04:20.559    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:04:20.559    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:04:20.559    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:04:20.559    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:04:20.559    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:04:20.559    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:04:20.559    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:04:21.126    LINK dev_ut
00:04:21.126    LINK ftl_l2p_ut
00:04:21.385    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:04:21.385    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:04:21.643    LINK ftl_io_ut
00:04:21.643    LINK blobfs_sync_ut
00:04:21.901    LINK blobfs_async_ut
00:04:22.160    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:04:22.160    LINK scsi_ut
00:04:22.160    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:04:22.160    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:04:22.419    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:04:22.419    LINK lun_ut
00:04:22.419    LINK lvol_ut
00:04:22.419    LINK ftl_band_ut
00:04:22.986    CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o
00:04:22.986    CC test/unit/lib/nvmf/auth.c/auth_ut.o
00:04:22.986    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:04:23.919    LINK scsi_pr_ut
00:04:23.919    LINK ctrlr_bdev_ut
00:04:23.919    LINK subsystem_ut
00:04:23.919    LINK scsi_bdev_ut
00:04:24.177    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:04:24.177    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:04:24.177    LINK nvmf_ut
00:04:24.177    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:04:24.177    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:04:24.436    LINK ftl_bitmap_ut
00:04:24.694    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:04:24.952    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:04:24.952    LINK ftl_p2l_ut
00:04:24.952    LINK ctrlr_discovery_ut
00:04:25.255    LINK ftl_mempool_ut
00:04:25.255    LINK ctrlr_ut
00:04:25.255    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:04:25.533    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:04:25.533    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:04:25.791    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:04:25.791    LINK ftl_mngt_ut
00:04:26.050    LINK auth_ut
00:04:26.050    LINK init_grp_ut
00:04:26.307    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:04:26.307    LINK tcp_ut
00:04:26.307    CC test/unit/lib/iscsi/param.c/param_ut.o
00:04:26.566    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:04:26.824    LINK conn_ut
00:04:26.824    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:04:27.081    LINK param_ut
00:04:27.647    LINK ftl_layout_upgrade_ut
00:04:27.904    LINK ftl_sb_ut
00:04:28.163    LINK portal_grp_ut
00:04:28.420    LINK tgt_node_ut
00:04:28.987    LINK transport_ut
00:04:28.987    LINK rdma_ut
00:04:29.554    LINK vhost_ut
00:04:29.554    LINK iscsi_ut
00:04:29.812  
00:04:29.812  real	2m33.326s
00:04:29.812  user	12m14.617s
00:04:29.812  sys	3m16.325s
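
A note on the timing block above: real is wall-clock time for the whole unit-test build, while user and sys are CPU time summed across every parallel compiler job, so 12m15s of user time fitting inside 2m33s of wall time simply reflects the parallel make recorded later in this log (MAKEFLAGS=-j10). Reproducing the same three-line measurement:

    time make -j10   # prints real/user/sys exactly like the block above
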
00:04:29.812   13:41:12 unittest_build -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:29.812   13:41:12 unittest_build -- common/autotest_common.sh@10 -- $ set +x
00:04:29.812  ************************************
00:04:29.812  END TEST unittest_build
00:04:29.812  ************************************
00:04:29.813   13:41:12  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:29.813   13:41:12  -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:29.813   13:41:12  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:29.813   13:41:12  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:29.813   13:41:12  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:04:29.813   13:41:12  -- pm/common@44 -- $ pid=2424
00:04:29.813   13:41:12  -- pm/common@50 -- $ kill -TERM 2424
00:04:29.813   13:41:12  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:29.813   13:41:12  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:04:29.813   13:41:12  -- pm/common@44 -- $ pid=2426
00:04:29.813   13:41:12  -- pm/common@50 -- $ kill -TERM 2426
00:04:29.813   13:41:12  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:04:29.813   13:41:12  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:30.071    13:41:12  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:30.071     13:41:12  -- common/autotest_common.sh@1711 -- # lcov --version
00:04:30.071     13:41:12  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:30.071    13:41:12  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:30.071    13:41:12  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:30.071    13:41:12  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:30.071    13:41:12  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:30.071    13:41:12  -- scripts/common.sh@336 -- # IFS=.-:
00:04:30.071    13:41:12  -- scripts/common.sh@336 -- # read -ra ver1
00:04:30.071    13:41:12  -- scripts/common.sh@337 -- # IFS=.-:
00:04:30.071    13:41:12  -- scripts/common.sh@337 -- # read -ra ver2
00:04:30.071    13:41:12  -- scripts/common.sh@338 -- # local 'op=<'
00:04:30.071    13:41:12  -- scripts/common.sh@340 -- # ver1_l=2
00:04:30.071    13:41:12  -- scripts/common.sh@341 -- # ver2_l=1
00:04:30.071    13:41:12  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:30.071    13:41:12  -- scripts/common.sh@344 -- # case "$op" in
00:04:30.071    13:41:12  -- scripts/common.sh@345 -- # : 1
00:04:30.071    13:41:12  -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:30.071    13:41:12  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:30.071     13:41:12  -- scripts/common.sh@365 -- # decimal 1
00:04:30.071     13:41:12  -- scripts/common.sh@353 -- # local d=1
00:04:30.071     13:41:12  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:30.071     13:41:12  -- scripts/common.sh@355 -- # echo 1
00:04:30.071    13:41:12  -- scripts/common.sh@365 -- # ver1[v]=1
00:04:30.071     13:41:12  -- scripts/common.sh@366 -- # decimal 2
00:04:30.071     13:41:12  -- scripts/common.sh@353 -- # local d=2
00:04:30.071     13:41:12  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:30.071     13:41:12  -- scripts/common.sh@355 -- # echo 2
00:04:30.071    13:41:12  -- scripts/common.sh@366 -- # ver2[v]=2
00:04:30.071    13:41:12  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:30.071    13:41:12  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:30.071    13:41:12  -- scripts/common.sh@368 -- # return 0
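
The trace above is scripts/common.sh comparing the installed lcov (1.15) against 2 to decide which option spelling to use: both version strings are split on ., - and :, then compared element-wise as integers. A minimal stand-alone sketch of that comparison (not the exact cmp_versions implementation, which also validates each component with decimal()):

    # lt VER1 VER2 -> success when VER1 is strictly older than VER2
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal is not less-than
    }
    lt 1.15 2 && echo "legacy lcov: keep the --rc lcov_branch_coverage=1 spellings"

Since 1 < 2 already at the first component, the answer here is "older", which is why the log proceeds with the lcov_branch_coverage/lcov_function_coverage option names used by lcov 1.x rather than the renamed 2.x ones.
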
00:04:30.071    13:41:12  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:30.071    13:41:12  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:30.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.071  		--rc genhtml_branch_coverage=1
00:04:30.071  		--rc genhtml_function_coverage=1
00:04:30.071  		--rc genhtml_legend=1
00:04:30.071  		--rc geninfo_all_blocks=1
00:04:30.071  		--rc geninfo_unexecuted_blocks=1
00:04:30.071  		
00:04:30.071  		'
00:04:30.071    13:41:12  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:30.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.071  		--rc genhtml_branch_coverage=1
00:04:30.071  		--rc genhtml_function_coverage=1
00:04:30.071  		--rc genhtml_legend=1
00:04:30.071  		--rc geninfo_all_blocks=1
00:04:30.071  		--rc geninfo_unexecuted_blocks=1
00:04:30.071  		
00:04:30.071  		'
00:04:30.071    13:41:12  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:30.071  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.071  		--rc genhtml_branch_coverage=1
00:04:30.072  		--rc genhtml_function_coverage=1
00:04:30.072  		--rc genhtml_legend=1
00:04:30.072  		--rc geninfo_all_blocks=1
00:04:30.072  		--rc geninfo_unexecuted_blocks=1
00:04:30.072  		
00:04:30.072  		'
00:04:30.072    13:41:12  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:30.072  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.072  		--rc genhtml_branch_coverage=1
00:04:30.072  		--rc genhtml_function_coverage=1
00:04:30.072  		--rc genhtml_legend=1
00:04:30.072  		--rc geninfo_all_blocks=1
00:04:30.072  		--rc geninfo_unexecuted_blocks=1
00:04:30.072  		
00:04:30.072  		'
00:04:30.072   13:41:12  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:30.072     13:41:12  -- nvmf/common.sh@7 -- # uname -s
00:04:30.072    13:41:12  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:30.072    13:41:12  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:30.072    13:41:12  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:30.072    13:41:12  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:30.072    13:41:12  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:30.072    13:41:12  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:30.072    13:41:12  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:30.072    13:41:12  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:30.072    13:41:12  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:30.072     13:41:12  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:30.072    13:41:12  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eab8dd07-70d5-4d8b-b2aa-3ed47cbc8689
00:04:30.072    13:41:12  -- nvmf/common.sh@18 -- # NVME_HOSTID=eab8dd07-70d5-4d8b-b2aa-3ed47cbc8689
00:04:30.072    13:41:12  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:30.072    13:41:12  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:30.072    13:41:12  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:30.072    13:41:12  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:30.072    13:41:12  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:30.072     13:41:12  -- scripts/common.sh@15 -- # shopt -s extglob
00:04:30.072     13:41:12  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:30.072     13:41:12  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:30.072     13:41:12  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:30.072      13:41:12  -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:30.072      13:41:12  -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:30.072      13:41:12  -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:30.072      13:41:12  -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:30.072      13:41:12  -- paths/export.sh@6 -- # export PATH
00:04:30.072      13:41:12  -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:30.072    13:41:12  -- nvmf/common.sh@51 -- # : 0
00:04:30.072    13:41:12  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:30.072    13:41:12  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:30.072    13:41:12  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:30.072    13:41:12  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:30.072    13:41:12  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:30.072    13:41:12  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:30.072  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:30.072    13:41:12  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:30.072    13:41:12  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:30.072    13:41:12  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:30.072   13:41:12  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:30.072    13:41:12  -- spdk/autotest.sh@32 -- # uname -s
00:04:30.072   13:41:12  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:30.072   13:41:12  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E'
00:04:30.072   13:41:12  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:30.072   13:41:12  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:04:30.072   13:41:12  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
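
autotest.sh first saves the distribution's crash handler (Ubuntu's apport pipe, captured as old_core_pattern above) and then points future core dumps at SPDK's own collector. xtrace does not print redirections, but the echo of the pipe handler almost certainly lands in /proc/sys/kernel/core_pattern, since the old value was read from there one line earlier; the mechanism, sketched:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)    # restored when the run ends
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern                     # %P pid, %s signal, %t time
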
00:04:30.072   13:41:12  -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:30.072    13:41:12  -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:30.072   13:41:12  -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm
00:04:30.330   13:41:12  -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property
00:04:30.330   13:41:12  -- spdk/autotest.sh@48 -- # udevadm_pid=62401
00:04:30.330   13:41:12  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:30.330   13:41:12  -- pm/common@17 -- # local monitor
00:04:30.330   13:41:12  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:30.330   13:41:12  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:30.330   13:41:12  -- pm/common@25 -- # sleep 1
00:04:30.330    13:41:12  -- pm/common@21 -- # date +%s
00:04:30.330    13:41:12  -- pm/common@21 -- # date +%s
00:04:30.330   13:41:12  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733924472
00:04:30.330   13:41:12  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733924472
00:04:30.330  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733924472_collect-vmstat.pm.log
00:04:30.330  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733924472_collect-cpu-load.pm.log
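
Before the tests start, two resource monitors are forked into the background, each tagged with the same date +%s stamp and tracked through a PID file so that stop_monitor_resources (the kill -TERM lines near the top of this section) can tear them down at the end of the run. A simplified sketch of the pattern, with a bare vmstat standing in for the real collect-cpu-load/collect-vmstat scripts:

    output=/home/vagrant/spdk_repo/spdk/../output/power
    stamp=$(date +%s)
    mkdir -p "$output"
    vmstat 1 > "$output/monitor.autotest.sh.${stamp}_collect-vmstat.pm.log" &
    echo $! > "$output/collect-vmstat.pid"    # consumed later by kill -TERM
    # ... run the test suite ...
    kill -TERM "$(cat "$output/collect-vmstat.pid")"
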
00:04:31.264   13:41:13  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:31.264   13:41:13  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:31.264   13:41:13  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:31.264   13:41:13  -- common/autotest_common.sh@10 -- # set +x
00:04:31.264   13:41:13  -- spdk/autotest.sh@59 -- # create_test_list
00:04:31.264   13:41:13  -- common/autotest_common.sh@752 -- # xtrace_disable
00:04:31.264   13:41:13  -- common/autotest_common.sh@10 -- # set +x
00:04:31.264     13:41:13  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:04:31.264    13:41:13  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:04:31.264   13:41:13  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:04:31.264   13:41:13  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:04:31.264   13:41:13  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:04:31.264   13:41:13  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:31.264    13:41:13  -- common/autotest_common.sh@1457 -- # uname
00:04:31.264   13:41:13  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:04:31.264   13:41:13  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:31.264    13:41:13  -- common/autotest_common.sh@1477 -- # uname
00:04:31.264   13:41:13  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:04:31.264   13:41:13  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:04:31.264   13:41:13  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:31.264  lcov: LCOV version 1.15
00:04:31.264   13:41:14  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:04:39.440  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:39.440  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
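
The -i capture above records a baseline in which every instrumented line has an execution count of zero (hence the warning for nvme_stubs.gcno, which contains no functions at all); merging this baseline with the post-test capture keeps never-executed files visible in the final report instead of silently dropping them. The usual three-step flow, sketched with the same rc options the log uses:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $LCOV_OPTS -q -c --no-external -i -d . -t Baseline -o cov_base.info  # all-zero baseline
    # ... run the unit tests, producing .gcda files ...
    lcov $LCOV_OPTS -q -c --no-external -d . -t Tests -o cov_test.info        # real counts
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info       # union of both
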
00:05:47.145   13:42:24  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:47.145   13:42:24  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:47.145   13:42:24  -- common/autotest_common.sh@10 -- # set +x
00:05:47.145   13:42:24  -- spdk/autotest.sh@78 -- # rm -f
00:05:47.145   13:42:24  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:47.145  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:47.145  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:47.145   13:42:24  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:47.145   13:42:24  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:05:47.145   13:42:24  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:05:47.145   13:42:24  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:05:47.145   13:42:24  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:05:47.145   13:42:24  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:05:47.145   13:42:24  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:05:47.145   13:42:24  -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:05:47.145   13:42:24  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:47.145   13:42:24  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:05:47.145   13:42:24  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:05:47.145   13:42:24  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:47.145   13:42:24  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:47.145   13:42:24  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
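
get_zoned_devs walks every NVMe namespace and asks the block layer whether it is zoned; a namespace counts as zoned when /sys/block/<ns>/queue/zoned reads anything other than "none" (here it reads "none", so the zoned list stays empty and the (( 0 > 0 )) gate is skipped). A compact sketch of the same scan:

    zoned=()
    for nvme in /sys/class/nvme/nvme*; do
        for ns in "$nvme"/nvme*n*; do
            dev=${ns##*/}
            [[ -e /sys/block/$dev/queue/zoned ]] || continue
            [[ $(< "/sys/block/$dev/queue/zoned") != none ]] && zoned+=("$dev")
        done
    done
    echo "zoned namespaces: ${zoned[*]:-none}"
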
00:05:47.145   13:42:24  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:47.145   13:42:24  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:47.145   13:42:24  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:47.145   13:42:24  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:47.145   13:42:24  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:47.145  No valid GPT data, bailing
00:05:47.145    13:42:24  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:47.145   13:42:24  -- scripts/common.sh@394 -- # pt=
00:05:47.145   13:42:24  -- scripts/common.sh@395 -- # return 1
00:05:47.145   13:42:24  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:47.145  1+0 records in
00:05:47.145  1+0 records out
00:05:47.145  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428813 s, 245 MB/s
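
The namespace is then checked for an existing partition table twice, first with SPDK's own GPT parser (spdk-gpt.py, which bails with "No valid GPT data") and then with blkid; only when both come back empty is the first MiB zeroed, so stale metadata cannot leak into tests that run later. Sketched (device name taken from this run; the dd is destructive):

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # blkid exits non-zero when nothing is found
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1      # scrub any stale signatures
    fi
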
00:05:47.145   13:42:24  -- spdk/autotest.sh@105 -- # sync
00:05:47.145   13:42:24  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:47.145   13:42:24  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:47.145    13:42:24  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:47.145    13:42:26  -- spdk/autotest.sh@111 -- # uname -s
00:05:47.145   13:42:26  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:47.145   13:42:26  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:47.145   13:42:26  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:47.145  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:47.145  Hugepages
00:05:47.145  node     hugesize     free /  total
00:05:47.145  node0   1048576kB        0 /      0
00:05:47.145  node0      2048kB        0 /      0
00:05:47.145  
00:05:47.145  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:05:47.145  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:05:47.145  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:05:47.145    13:42:27  -- spdk/autotest.sh@117 -- # uname -s
00:05:47.145   13:42:27  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:47.145   13:42:27  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:47.145   13:42:27  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:47.145  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:47.145  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:47.145   13:42:28  -- common/autotest_common.sh@1517 -- # sleep 1
00:05:47.145   13:42:29  -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:47.145   13:42:29  -- common/autotest_common.sh@1518 -- # local bdfs
00:05:47.145   13:42:29  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:47.145    13:42:29  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:47.145    13:42:29  -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:47.145    13:42:29  -- common/autotest_common.sh@1498 -- # local bdfs
00:05:47.145    13:42:29  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:47.145     13:42:29  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:47.145     13:42:29  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:47.145    13:42:29  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:05:47.145    13:42:29  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
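
get_nvme_bdfs builds the controller list by asking gen_nvme.sh for an SPDK JSON config and pulling each traddr out with jq; on this VM that yields the single emulated controller at 0000:00:10.0. For comparison, a sysfs-only sketch that finds the same BDFs by PCI class code (0x010802 is NVMe; this is an illustration, not what autotest_common.sh does):

    bdfs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ $(< "$dev/class") == 0x010802 ]] && bdfs+=("${dev##*/}")
    done
    printf '%s\n' "${bdfs[@]}"
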
00:05:47.145   13:42:29  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:47.404  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:47.662  Waiting for block devices as requested
00:05:47.662  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:47.662   13:42:30  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:47.662    13:42:30  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:47.662     13:42:30  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:05:47.662     13:42:30  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:05:47.662    13:42:30  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0
00:05:47.662    13:42:30  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]]
00:05:47.662     13:42:30  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0
00:05:47.662    13:42:30  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:05:47.662   13:42:30  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:05:47.662   13:42:30  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:05:47.662    13:42:30  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:05:47.662    13:42:30  -- common/autotest_common.sh@1531 -- # grep oacs
00:05:47.662    13:42:30  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:05:47.662   13:42:30  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:05:47.662   13:42:30  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:05:47.662   13:42:30  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:05:47.662    13:42:30  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:05:47.662    13:42:30  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:05:47.662    13:42:30  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:47.663   13:42:30  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:05:47.663   13:42:30  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:05:47.663   13:42:30  -- common/autotest_common.sh@1543 -- # continue
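
The per-controller loop reads Identify Controller and keeps only bit 3 of the OACS field, which the NVMe spec assigns to Namespace Management and Attachment: 0x12a & 0x8 = 8, so the capability is supported, but an unvmcap of 0 means there is no unallocated capacity to reclaim, and the loop continues without reverting anything. The bit test in isolation:

    oacs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/oacs/ {print $2}')   # " 0x12a" on this box
    (( ns_manage = oacs & 0x8 ))                                    # bit 3 = NS management
    (( ns_manage != 0 )) && echo "namespace management supported"
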
00:05:47.663   13:42:30  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:05:47.663   13:42:30  -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:47.663   13:42:30  -- common/autotest_common.sh@10 -- # set +x
00:05:47.921   13:42:30  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:05:47.921   13:42:30  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:47.921   13:42:30  -- common/autotest_common.sh@10 -- # set +x
00:05:47.921   13:42:30  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:48.178  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:05:48.178  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:49.116   13:42:31  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:05:49.116   13:42:31  -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:49.116   13:42:31  -- common/autotest_common.sh@10 -- # set +x
00:05:49.116   13:42:31  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:05:49.116   13:42:31  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:05:49.116    13:42:31  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:05:49.116    13:42:31  -- common/autotest_common.sh@1563 -- # bdfs=()
00:05:49.116    13:42:31  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:05:49.116    13:42:31  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:05:49.116    13:42:31  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:05:49.116     13:42:31  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:05:49.116     13:42:31  -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:49.116     13:42:31  -- common/autotest_common.sh@1498 -- # local bdfs
00:05:49.116     13:42:31  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:49.116      13:42:31  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:49.116      13:42:31  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:49.116     13:42:31  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:05:49.116     13:42:31  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:05:49.116    13:42:31  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:05:49.116     13:42:31  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:05:49.116    13:42:31  -- common/autotest_common.sh@1566 -- # device=0x0010
00:05:49.116    13:42:31  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:05:49.116    13:42:31  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:05:49.116    13:42:31  -- common/autotest_common.sh@1572 -- # return 0
00:05:49.116   13:42:31  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:05:49.116   13:42:31  -- common/autotest_common.sh@1580 -- # return 0
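
opal_revert_cleanup only acts on controllers whose PCI device ID matches 0x0a54 (the ID of Intel's DC P4500/P4600 family, presumably the Opal-capable drives in the physical pool); the emulated QEMU controller reports 0x0010, so the filtered list is empty and both returns short-circuit to success. The filter, sketched over this run's single BDF:

    want=0x0a54
    matches=()
    for bdf in 0000:00:10.0; do   # normally the get_nvme_bdfs output
        [[ $(< "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && matches+=("$bdf")
    done
    echo "opal-capable controllers: ${matches[*]:-none}"
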
00:05:49.116   13:42:31  -- spdk/autotest.sh@137 -- # '[' 1 -eq 1 ']'
00:05:49.116   13:42:31  -- spdk/autotest.sh@138 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:49.116   13:42:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.116   13:42:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.116   13:42:31  -- common/autotest_common.sh@10 -- # set +x
00:05:49.116  ************************************
00:05:49.116  START TEST unittest
00:05:49.116  ************************************
00:05:49.116   13:42:31 unittest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:49.116  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:49.116  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:05:49.376  + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:05:49.376  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:49.376  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
00:05:49.376  + rootdir=/home/vagrant/spdk_repo/spdk
00:05:49.376  + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:05:49.376  ++ rpc_py=rpc_cmd
00:05:49.376  ++ set -e
00:05:49.376  ++ shopt -s nullglob
00:05:49.376  ++ shopt -s extglob
00:05:49.376  ++ shopt -s inherit_errexit
00:05:49.376  ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:05:49.376  ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:05:49.376  ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:05:49.376  +++ CONFIG_WPDK_DIR=
00:05:49.376  +++ CONFIG_ASAN=y
00:05:49.376  +++ CONFIG_VBDEV_COMPRESS=n
00:05:49.376  +++ CONFIG_HAVE_EXECINFO_H=y
00:05:49.376  +++ CONFIG_USDT=n
00:05:49.376  +++ CONFIG_CUSTOMOCF=n
00:05:49.376  +++ CONFIG_PREFIX=/usr/local
00:05:49.376  +++ CONFIG_RBD=n
00:05:49.376  +++ CONFIG_LIBDIR=
00:05:49.376  +++ CONFIG_IDXD=y
00:05:49.376  +++ CONFIG_NVME_CUSE=y
00:05:49.376  +++ CONFIG_SMA=n
00:05:49.376  +++ CONFIG_VTUNE=n
00:05:49.376  +++ CONFIG_TSAN=n
00:05:49.376  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:05:49.376  +++ CONFIG_VFIO_USER_DIR=
00:05:49.376  +++ CONFIG_MAX_NUMA_NODES=1
00:05:49.376  +++ CONFIG_PGO_CAPTURE=n
00:05:49.376  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:05:49.376  +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:49.376  +++ CONFIG_LTO=n
00:05:49.376  +++ CONFIG_ISCSI_INITIATOR=y
00:05:49.376  +++ CONFIG_CET=n
00:05:49.376  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:05:49.376  +++ CONFIG_OCF_PATH=
00:05:49.376  +++ CONFIG_RDMA_SET_TOS=y
00:05:49.376  +++ CONFIG_AIO_FSDEV=y
00:05:49.376  +++ CONFIG_HAVE_ARC4RANDOM=y
00:05:49.376  +++ CONFIG_HAVE_LIBARCHIVE=n
00:05:49.376  +++ CONFIG_UBLK=y
00:05:49.376  +++ CONFIG_ISAL_CRYPTO=y
00:05:49.376  +++ CONFIG_OPENSSL_PATH=
00:05:49.376  +++ CONFIG_OCF=n
00:05:49.376  +++ CONFIG_FUSE=n
00:05:49.376  +++ CONFIG_VTUNE_DIR=
00:05:49.376  +++ CONFIG_FUZZER_LIB=
00:05:49.376  +++ CONFIG_FUZZER=n
00:05:49.376  +++ CONFIG_FSDEV=y
00:05:49.376  +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:05:49.376  +++ CONFIG_CRYPTO=n
00:05:49.376  +++ CONFIG_PGO_USE=n
00:05:49.376  +++ CONFIG_VHOST=y
00:05:49.376  +++ CONFIG_DAOS=n
00:05:49.376  +++ CONFIG_DPDK_INC_DIR=
00:05:49.376  +++ CONFIG_DAOS_DIR=
00:05:49.376  +++ CONFIG_UNIT_TESTS=y
00:05:49.376  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:05:49.376  +++ CONFIG_VIRTIO=y
00:05:49.376  +++ CONFIG_DPDK_UADK=n
00:05:49.376  +++ CONFIG_COVERAGE=y
00:05:49.376  +++ CONFIG_RDMA=y
00:05:49.376  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:05:49.376  +++ CONFIG_HAVE_LZ4=n
00:05:49.376  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:05:49.376  +++ CONFIG_URING_PATH=
00:05:49.376  +++ CONFIG_XNVME=n
00:05:49.376  +++ CONFIG_VFIO_USER=n
00:05:49.376  +++ CONFIG_ARCH=native
00:05:49.376  +++ CONFIG_HAVE_EVP_MAC=y
00:05:49.376  +++ CONFIG_URING_ZNS=n
00:05:49.376  +++ CONFIG_WERROR=y
00:05:49.376  +++ CONFIG_HAVE_LIBBSD=n
00:05:49.376  +++ CONFIG_UBSAN=y
00:05:49.376  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:05:49.376  +++ CONFIG_IPSEC_MB_DIR=
00:05:49.376  +++ CONFIG_GOLANG=n
00:05:49.376  +++ CONFIG_ISAL=y
00:05:49.376  +++ CONFIG_IDXD_KERNEL=y
00:05:49.376  +++ CONFIG_DPDK_LIB_DIR=
00:05:49.376  +++ CONFIG_RDMA_PROV=verbs
00:05:49.376  +++ CONFIG_APPS=y
00:05:49.376  +++ CONFIG_SHARED=n
00:05:49.376  +++ CONFIG_HAVE_KEYUTILS=y
00:05:49.376  +++ CONFIG_FC_PATH=
00:05:49.376  +++ CONFIG_DPDK_PKG_CONFIG=n
00:05:49.376  +++ CONFIG_FC=n
00:05:49.376  +++ CONFIG_AVAHI=n
00:05:49.376  +++ CONFIG_FIO_PLUGIN=y
00:05:49.376  +++ CONFIG_RAID5F=n
00:05:49.376  +++ CONFIG_EXAMPLES=y
00:05:49.376  +++ CONFIG_TESTS=y
00:05:49.376  +++ CONFIG_CRYPTO_MLX5=n
00:05:49.376  +++ CONFIG_MAX_LCORES=128
00:05:49.376  +++ CONFIG_IPSEC_MB=n
00:05:49.376  +++ CONFIG_PGO_DIR=
00:05:49.376  +++ CONFIG_DEBUG=y
00:05:49.376  +++ CONFIG_DPDK_COMPRESSDEV=n
00:05:49.376  +++ CONFIG_CROSS_PREFIX=
00:05:49.376  +++ CONFIG_COPY_FILE_RANGE=y
00:05:49.376  +++ CONFIG_URING=n
00:05:49.376  ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:05:49.376  +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:05:49.376  ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:05:49.376  +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:05:49.376  +++ _root=/home/vagrant/spdk_repo/spdk
00:05:49.376  +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:05:49.376  +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:05:49.376  +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:05:49.376  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:05:49.376  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:05:49.376  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:05:49.376  +++ VHOST_APP=("$_app_dir/vhost")
00:05:49.376  +++ DD_APP=("$_app_dir/spdk_dd")
00:05:49.376  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:05:49.376  +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:05:49.376  +++ [[ #ifndef SPDK_CONFIG_H
00:05:49.376  #define SPDK_CONFIG_H
00:05:49.376  #define SPDK_CONFIG_AIO_FSDEV 1
00:05:49.376  #define SPDK_CONFIG_APPS 1
00:05:49.376  #define SPDK_CONFIG_ARCH native
00:05:49.376  #define SPDK_CONFIG_ASAN 1
00:05:49.376  #undef SPDK_CONFIG_AVAHI
00:05:49.376  #undef SPDK_CONFIG_CET
00:05:49.376  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:05:49.376  #define SPDK_CONFIG_COVERAGE 1
00:05:49.376  #define SPDK_CONFIG_CROSS_PREFIX 
00:05:49.376  #undef SPDK_CONFIG_CRYPTO
00:05:49.376  #undef SPDK_CONFIG_CRYPTO_MLX5
00:05:49.376  #undef SPDK_CONFIG_CUSTOMOCF
00:05:49.376  #undef SPDK_CONFIG_DAOS
00:05:49.376  #define SPDK_CONFIG_DAOS_DIR 
00:05:49.376  #define SPDK_CONFIG_DEBUG 1
00:05:49.376  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:05:49.376  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:49.376  #define SPDK_CONFIG_DPDK_INC_DIR 
00:05:49.376  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:05:49.376  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:05:49.376  #undef SPDK_CONFIG_DPDK_UADK
00:05:49.376  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:49.376  #define SPDK_CONFIG_EXAMPLES 1
00:05:49.376  #undef SPDK_CONFIG_FC
00:05:49.376  #define SPDK_CONFIG_FC_PATH 
00:05:49.377  #define SPDK_CONFIG_FIO_PLUGIN 1
00:05:49.377  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:05:49.377  #define SPDK_CONFIG_FSDEV 1
00:05:49.377  #undef SPDK_CONFIG_FUSE
00:05:49.377  #undef SPDK_CONFIG_FUZZER
00:05:49.377  #define SPDK_CONFIG_FUZZER_LIB 
00:05:49.377  #undef SPDK_CONFIG_GOLANG
00:05:49.377  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:05:49.377  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:05:49.377  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:05:49.377  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:05:49.377  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:05:49.377  #undef SPDK_CONFIG_HAVE_LIBBSD
00:05:49.377  #undef SPDK_CONFIG_HAVE_LZ4
00:05:49.377  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:05:49.377  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:05:49.377  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:05:49.377  #define SPDK_CONFIG_IDXD 1
00:05:49.377  #define SPDK_CONFIG_IDXD_KERNEL 1
00:05:49.377  #undef SPDK_CONFIG_IPSEC_MB
00:05:49.377  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:05:49.377  #define SPDK_CONFIG_ISAL 1
00:05:49.377  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:05:49.377  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:05:49.377  #define SPDK_CONFIG_LIBDIR 
00:05:49.377  #undef SPDK_CONFIG_LTO
00:05:49.377  #define SPDK_CONFIG_MAX_LCORES 128
00:05:49.377  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:05:49.377  #define SPDK_CONFIG_NVME_CUSE 1
00:05:49.377  #undef SPDK_CONFIG_OCF
00:05:49.377  #define SPDK_CONFIG_OCF_PATH 
00:05:49.377  #define SPDK_CONFIG_OPENSSL_PATH 
00:05:49.377  #undef SPDK_CONFIG_PGO_CAPTURE
00:05:49.377  #define SPDK_CONFIG_PGO_DIR 
00:05:49.377  #undef SPDK_CONFIG_PGO_USE
00:05:49.377  #define SPDK_CONFIG_PREFIX /usr/local
00:05:49.377  #undef SPDK_CONFIG_RAID5F
00:05:49.377  #undef SPDK_CONFIG_RBD
00:05:49.377  #define SPDK_CONFIG_RDMA 1
00:05:49.377  #define SPDK_CONFIG_RDMA_PROV verbs
00:05:49.377  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:05:49.377  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:05:49.377  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:05:49.377  #undef SPDK_CONFIG_SHARED
00:05:49.377  #undef SPDK_CONFIG_SMA
00:05:49.377  #define SPDK_CONFIG_TESTS 1
00:05:49.377  #undef SPDK_CONFIG_TSAN
00:05:49.377  #define SPDK_CONFIG_UBLK 1
00:05:49.377  #define SPDK_CONFIG_UBSAN 1
00:05:49.377  #define SPDK_CONFIG_UNIT_TESTS 1
00:05:49.377  #undef SPDK_CONFIG_URING
00:05:49.377  #define SPDK_CONFIG_URING_PATH 
00:05:49.377  #undef SPDK_CONFIG_URING_ZNS
00:05:49.377  #undef SPDK_CONFIG_USDT
00:05:49.377  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:05:49.377  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:05:49.377  #undef SPDK_CONFIG_VFIO_USER
00:05:49.377  #define SPDK_CONFIG_VFIO_USER_DIR 
00:05:49.377  #define SPDK_CONFIG_VHOST 1
00:05:49.377  #define SPDK_CONFIG_VIRTIO 1
00:05:49.377  #undef SPDK_CONFIG_VTUNE
00:05:49.377  #define SPDK_CONFIG_VTUNE_DIR 
00:05:49.377  #define SPDK_CONFIG_WERROR 1
00:05:49.377  #define SPDK_CONFIG_WPDK_DIR 
00:05:49.377  #undef SPDK_CONFIG_XNVME
00:05:49.377  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
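
The wall of #define lines is not a file dump for its own sake: applications.sh slurps include/spdk/config.h into a [[ ... == *pattern* ]] test (xtrace prints every glob character backslash-escaped, hence the \#\d\e\f\i\n\e run at the end) to confirm this is a debug build. The idiom, unescaped:

    [[ $(< /home/vagrant/spdk_repo/spdk/include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]] \
        && echo "debug build detected"
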
00:05:49.377  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:05:49.377  ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:49.377  +++ shopt -s extglob
00:05:49.377  +++ [[ -e /bin/wpdk_common.sh ]]
00:05:49.377  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:49.377  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:49.377  ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:49.377  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:49.377  ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:49.377  ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:49.377  ++++ export PATH
00:05:49.377  ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:49.377  ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:05:49.377  +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:05:49.377  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:05:49.377  +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:05:49.377  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:05:49.377  +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:05:49.377  +++ TEST_TAG=N/A
00:05:49.377  +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:05:49.377  +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:05:49.377  ++++ uname -s
00:05:49.377  +++ PM_OS=Linux
00:05:49.377  +++ MONITOR_RESOURCES_SUDO=()
00:05:49.377  +++ declare -A MONITOR_RESOURCES_SUDO
00:05:49.377  +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:05:49.377  +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:05:49.377  +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:05:49.377  +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:05:49.377  +++ SUDO[0]=
00:05:49.377  +++ SUDO[1]='sudo -E'
00:05:49.377  +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:05:49.377  +++ [[ Linux == FreeBSD ]]
00:05:49.377  +++ [[ Linux == Linux ]]
00:05:49.377  +++ [[ QEMU != QEMU ]]
00:05:49.377  +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:05:49.377  ++ : 0
00:05:49.377  ++ export RUN_NIGHTLY
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_RUN_VALGRIND
00:05:49.377  ++ : 1
00:05:49.377  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:05:49.377  ++ : 1
00:05:49.377  ++ export SPDK_TEST_UNITTEST
00:05:49.377  ++ :
00:05:49.377  ++ export SPDK_TEST_AUTOBUILD
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_RELEASE_BUILD
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_ISAL
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_ISCSI
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_ISCSI_INITIATOR
00:05:49.377  ++ : 1
00:05:49.377  ++ export SPDK_TEST_NVME
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVME_PMR
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVME_BP
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVME_CLI
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVME_CUSE
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVME_FDP
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVMF
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_VFIOUSER
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_VFIOUSER_QEMU
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_FUZZER
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_FUZZER_SHORT
00:05:49.377  ++ : rdma
00:05:49.377  ++ export SPDK_TEST_NVMF_TRANSPORT
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_RBD
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_VHOST
00:05:49.377  ++ : 1
00:05:49.377  ++ export SPDK_TEST_BLOCKDEV
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_RAID
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_IOAT
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_BLOBFS
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_VHOST_INIT
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_LVOL
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_VBDEV_COMPRESS
00:05:49.377  ++ : 1
00:05:49.377  ++ export SPDK_RUN_ASAN
00:05:49.377  ++ : 1
00:05:49.377  ++ export SPDK_RUN_UBSAN
00:05:49.377  ++ :
00:05:49.377  ++ export SPDK_RUN_EXTERNAL_DPDK
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_RUN_NON_ROOT
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_CRYPTO
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_FTL
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_OCF
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_VMD
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_OPAL
00:05:49.377  ++ :
00:05:49.377  ++ export SPDK_TEST_NATIVE_DPDK
00:05:49.377  ++ : true
00:05:49.377  ++ export SPDK_AUTOTEST_X
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_URING
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_USDT
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_USE_IGB_UIO
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_SCHEDULER
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_SCANBUILD
00:05:49.377  ++ :
00:05:49.377  ++ export SPDK_TEST_NVMF_NICS
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_SMA
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_DAOS
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_XNVME
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_ACCEL
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_ACCEL_DSA
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_ACCEL_IAA
00:05:49.377  ++ :
00:05:49.377  ++ export SPDK_TEST_FUZZER_TARGET
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVMF_MDNS
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_JSONRPC_GO_CLIENT
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_SETUP
00:05:49.377  ++ : 0
00:05:49.377  ++ export SPDK_TEST_NVME_INTERRUPT
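
Each flag above appears as a ": <value>" line followed by an export: that is the xtrace of the ':' no-op builtin combined with default-assignment expansion, which seeds a variable only when the caller (here autorun-spdk.conf) has not already set it. The idiom behind one such pair of lines:

    : "${RUN_NIGHTLY:=0}"   # traces as ': 0' when RUN_NIGHTLY was unset
    export RUN_NIGHTLY
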
00:05:49.377  ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:05:49.377  ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:05:49.377  ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:05:49.377  ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:05:49.377  ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:49.378  ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:49.378  ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:49.378  ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:49.378  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:05:49.378  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:05:49.378  ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:49.378  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:49.378  ++ export PYTHONDONTWRITEBYTECODE=1
00:05:49.378  ++ PYTHONDONTWRITEBYTECODE=1
00:05:49.378  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:05:49.378  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:05:49.378  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:05:49.378  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:05:49.378  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:05:49.378  ++ rm -rf /var/tmp/asan_suppression_file
00:05:49.378  ++ cat
00:05:49.378  ++ echo leak:libfuse3.so
00:05:49.378  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:05:49.378  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
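
Just before the sanitized binaries run, a LeakSanitizer suppression file is rebuilt; the single leak:libfuse3.so entry above silences a known leak report from libfuse, and LSAN_OPTIONS points the runtime at the file. Reconstructed (the redirection targets of the cat/echo pair are not visible in xtrace, so the file-writing step is an assumption):

    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file   # assumed target of the echo above
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
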
00:05:49.378  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:05:49.378  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:05:49.378  ++ '[' -z /var/spdk/dependencies ']'
00:05:49.378  ++ export DEPENDENCY_DIR
00:05:49.378  ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:05:49.378  ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:05:49.378  ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:05:49.378  ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:05:49.378  ++ export QEMU_BIN=
00:05:49.378  ++ QEMU_BIN=
00:05:49.378  ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:05:49.378  ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:05:49.378  ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:05:49.378  ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:05:49.378  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:49.378  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:49.378  ++ _LCOV_MAIN=0
00:05:49.378  ++ _LCOV_LLVM=1
00:05:49.378  ++ _LCOV=
00:05:49.378  ++ [[ '' == *clang* ]]
00:05:49.378  ++ [[ 0 -eq 1 ]]
00:05:49.378  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:05:49.378  ++ _lcov_opt[_LCOV_MAIN]=
00:05:49.378  ++ lcov_opt=
00:05:49.378  ++ '[' 0 -eq 0 ']'
00:05:49.378  ++ export valgrind=
00:05:49.378  ++ valgrind=
00:05:49.378  +++ uname -s
00:05:49.378  ++ '[' Linux = Linux ']'
00:05:49.378  ++ HUGEMEM=4096
00:05:49.378  ++ export CLEAR_HUGE=yes
00:05:49.378  ++ CLEAR_HUGE=yes
00:05:49.378  ++ MAKE=make
00:05:49.378  +++ nproc
00:05:49.378  ++ MAKEFLAGS=-j10
00:05:49.378  ++ export HUGEMEM=4096
00:05:49.378  ++ HUGEMEM=4096
00:05:49.378  ++ NO_HUGE=()
00:05:49.378  ++ TEST_MODE=
00:05:49.378  ++ [[ -z '' ]]
00:05:49.378  ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:05:49.378  ++ exec
00:05:49.378  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:05:49.378  ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:05:49.378  ++ set_test_storage 2147483648
00:05:49.378  ++ [[ -v testdir ]]
00:05:49.378  ++ local requested_size=2147483648
00:05:49.378  ++ local mount target_dir
00:05:49.378  ++ local -A mounts fss sizes avails uses
00:05:49.378  ++ local source fs size avail mount use
00:05:49.378  ++ local storage_fallback storage_candidates
00:05:49.378  +++ mktemp -udt spdk.XXXXXX
00:05:49.378  ++ storage_fallback=/tmp/spdk.t0Y3Hg
00:05:49.378  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:05:49.378  ++ [[ -n '' ]]
00:05:49.378  ++ [[ -n '' ]]
00:05:49.378  ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.t0Y3Hg/tests/unit /tmp/spdk.t0Y3Hg
00:05:49.378  ++ requested_size=2214592512
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  +++ df -T
00:05:49.378  +++ grep -v Filesystem
00:05:49.378  ++ mounts["$mount"]=tmpfs
00:05:49.378  ++ fss["$mount"]=tmpfs
00:05:49.378  ++ avails["$mount"]=1252958208
00:05:49.378  ++ sizes["$mount"]=1254027264
00:05:49.378  ++ uses["$mount"]=1069056
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=/dev/vda1
00:05:49.378  ++ fss["$mount"]=ext4
00:05:49.378  ++ avails["$mount"]=9663799296
00:05:49.378  ++ sizes["$mount"]=19681529856
00:05:49.378  ++ uses["$mount"]=10000953344
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=tmpfs
00:05:49.378  ++ fss["$mount"]=tmpfs
00:05:49.378  ++ avails["$mount"]=6270115840
00:05:49.378  ++ sizes["$mount"]=6270115840
00:05:49.378  ++ uses["$mount"]=0
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=tmpfs
00:05:49.378  ++ fss["$mount"]=tmpfs
00:05:49.378  ++ avails["$mount"]=5242880
00:05:49.378  ++ sizes["$mount"]=5242880
00:05:49.378  ++ uses["$mount"]=0
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=/dev/vda16
00:05:49.378  ++ fss["$mount"]=ext4
00:05:49.378  ++ avails["$mount"]=777306112
00:05:49.378  ++ sizes["$mount"]=923156480
00:05:49.378  ++ uses["$mount"]=81207296
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=/dev/vda15
00:05:49.378  ++ fss["$mount"]=vfat
00:05:49.378  ++ avails["$mount"]=103000064
00:05:49.378  ++ sizes["$mount"]=109395968
00:05:49.378  ++ uses["$mount"]=6395904
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=tmpfs
00:05:49.378  ++ fss["$mount"]=tmpfs
00:05:49.378  ++ avails["$mount"]=1254010880
00:05:49.378  ++ sizes["$mount"]=1254023168
00:05:49.378  ++ uses["$mount"]=12288
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:05:49.378  ++ fss["$mount"]=fuse.sshfs
00:05:49.378  ++ avails["$mount"]=92273020928
00:05:49.378  ++ sizes["$mount"]=105088212992
00:05:49.378  ++ uses["$mount"]=7429758976
00:05:49.378  ++ read -r source fs size use avail _ mount
00:05:49.378  ++ printf '* Looking for test storage...\n'
00:05:49.378  * Looking for test storage...
00:05:49.378  ++ local target_space new_size
00:05:49.378  ++ for target_dir in "${storage_candidates[@]}"
00:05:49.378  +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:05:49.378  +++ awk '$1 !~ /Filesystem/{print $6}'
00:05:49.378  ++ mount=/
00:05:49.378  ++ target_space=9663799296
00:05:49.378  ++ (( target_space == 0 || target_space < requested_size ))
00:05:49.378  ++ (( target_space >= requested_size ))
00:05:49.378  ++ [[ ext4 == tmpfs ]]
00:05:49.378  ++ [[ ext4 == ramfs ]]
00:05:49.378  ++ [[ / == / ]]
00:05:49.378  ++ new_size=12215545856
00:05:49.378  ++ (( new_size * 100 / sizes[/] > 95 ))
00:05:49.378  ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:05:49.378  ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:05:49.378  ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:05:49.378  * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:05:49.378  ++ return 0
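Note: set_test_storage, traced above, loads df output into per-mount associative arrays and picks the first candidate directory whose filesystem can hold the requested size plus a 64 MiB pad (2147483648 grows to 2214592512 above). A condensed, not verbatim, sketch of that logic; GNU df with -B1/--output is an assumption here, since the trace shows only plain df -T:

    set_test_storage() {
        local requested_size=$(( $1 + 64 * 1024 * 1024 ))  # pad: 2147483648 -> 2214592512
        shift
        local -A avails
        local src avail mount dir
        # one associative-array entry per mount point, available bytes as value
        while read -r src avail mount; do
            avails["$mount"]=$avail
        done < <(df -B1 --output=source,avail,target | tail -n +2)
        for dir in "$@"; do
            mkdir -p "$dir"
            mount=$(df -B1 --output=target "$dir" | tail -n 1)
            if (( avails["$mount"] >= requested_size )); then
                export SPDK_TEST_STORAGE=$dir
                printf '* Found test storage at %s\n' "$dir"
                return 0
            fi
        done
        return 1
    }
    # e.g.: set_test_storage $((2 * 1024 ** 3)) "$testdir" "$(mktemp -udt spdk.XXXXXX)"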
00:05:49.378  ++ set -o errtrace
00:05:49.378  ++ shopt -s extdebug
00:05:49.378  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:05:49.378  ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
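Note: the errtrace/extdebug/trap/PS4 lines above install the error-backtrace scaffold and the timestamped xtrace prompt seen on every subsequent " 13:42:31 ..." line. A minimal stand-in, with a simplified print_backtrace (SPDK's real helper prints more context):

    set -o errtrace      # let the ERR trap fire inside functions and subshells
    shopt -s extdebug    # keep BASH_SOURCE/BASH_LINENO populated for backtraces
    print_backtrace() {  # simplified stand-in for the harness's version
        local i
        for (( i = 1; i < ${#FUNCNAME[@]}; i++ )); do
            echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i]}:${BASH_LINENO[$((i - 1))]})"
        done
    }
    trap 'trap - ERR; print_backtrace >&2' ERR
    # PS4 is prompt-expanded like PS1, so \t becomes the wall-clock time
    # stamped on each trace line:
    PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x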
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@1703 -- # true
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@1705 -- # xtrace_fd
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@29 -- # exec
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@31 -- # xtrace_restore
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@18 -- # set -x
00:05:49.378    13:42:31 unittest -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:49.378     13:42:31 unittest -- common/autotest_common.sh@1711 -- # lcov --version
00:05:49.378     13:42:31 unittest -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:49.378    13:42:32 unittest -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:49.378    13:42:32 unittest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:49.378    13:42:32 unittest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:49.378    13:42:32 unittest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:49.378    13:42:32 unittest -- scripts/common.sh@336 -- # IFS=.-:
00:05:49.378    13:42:32 unittest -- scripts/common.sh@336 -- # read -ra ver1
00:05:49.378    13:42:32 unittest -- scripts/common.sh@337 -- # IFS=.-:
00:05:49.378    13:42:32 unittest -- scripts/common.sh@337 -- # read -ra ver2
00:05:49.378    13:42:32 unittest -- scripts/common.sh@338 -- # local 'op=<'
00:05:49.378    13:42:32 unittest -- scripts/common.sh@340 -- # ver1_l=2
00:05:49.378    13:42:32 unittest -- scripts/common.sh@341 -- # ver2_l=1
00:05:49.378    13:42:32 unittest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:49.378    13:42:32 unittest -- scripts/common.sh@344 -- # case "$op" in
00:05:49.378    13:42:32 unittest -- scripts/common.sh@345 -- # : 1
00:05:49.378    13:42:32 unittest -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:49.378    13:42:32 unittest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:49.378     13:42:32 unittest -- scripts/common.sh@365 -- # decimal 1
00:05:49.378     13:42:32 unittest -- scripts/common.sh@353 -- # local d=1
00:05:49.378     13:42:32 unittest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:49.378     13:42:32 unittest -- scripts/common.sh@355 -- # echo 1
00:05:49.378    13:42:32 unittest -- scripts/common.sh@365 -- # ver1[v]=1
00:05:49.378     13:42:32 unittest -- scripts/common.sh@366 -- # decimal 2
00:05:49.378     13:42:32 unittest -- scripts/common.sh@353 -- # local d=2
00:05:49.378     13:42:32 unittest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:49.378     13:42:32 unittest -- scripts/common.sh@355 -- # echo 2
00:05:49.378    13:42:32 unittest -- scripts/common.sh@366 -- # ver2[v]=2
00:05:49.378    13:42:32 unittest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:49.378    13:42:32 unittest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:49.378    13:42:32 unittest -- scripts/common.sh@368 -- # return 0
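Note: the cmp_versions trace above compares the detected lcov version (1.15) against 2 by splitting both on '.', '-', and ':' and walking the fields numerically; 1 < 2 in the first field, so lt returns 0 and the legacy --rc options get selected below. A compact sketch of that comparison, assuming purely numeric fields (the real helper validates each field through its decimal() check first):

    # lt A B -> exit 0 when version A sorts strictly before version B
    lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # versions are equal
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"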
00:05:49.379    13:42:32 unittest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:49.379    13:42:32 unittest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:49.379  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.379  		--rc genhtml_branch_coverage=1
00:05:49.379  		--rc genhtml_function_coverage=1
00:05:49.379  		--rc genhtml_legend=1
00:05:49.379  		--rc geninfo_all_blocks=1
00:05:49.379  		--rc geninfo_unexecuted_blocks=1
00:05:49.379  		
00:05:49.379  		'
00:05:49.379    13:42:32 unittest -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:49.379  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.379  		--rc genhtml_branch_coverage=1
00:05:49.379  		--rc genhtml_function_coverage=1
00:05:49.379  		--rc genhtml_legend=1
00:05:49.379  		--rc geninfo_all_blocks=1
00:05:49.379  		--rc geninfo_unexecuted_blocks=1
00:05:49.379  		
00:05:49.379  		'
00:05:49.379    13:42:32 unittest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:49.379  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.379  		--rc genhtml_branch_coverage=1
00:05:49.379  		--rc genhtml_function_coverage=1
00:05:49.379  		--rc genhtml_legend=1
00:05:49.379  		--rc geninfo_all_blocks=1
00:05:49.379  		--rc geninfo_unexecuted_blocks=1
00:05:49.379  		
00:05:49.379  		'
00:05:49.379    13:42:32 unittest -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:49.379  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.379  		--rc genhtml_branch_coverage=1
00:05:49.379  		--rc genhtml_function_coverage=1
00:05:49.379  		--rc genhtml_legend=1
00:05:49.379  		--rc geninfo_all_blocks=1
00:05:49.379  		--rc geninfo_unexecuted_blocks=1
00:05:49.379  		
00:05:49.379  		'
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@159 -- # '[' 0 -eq 1 ']'
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@166 -- # '[' -z x ']'
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@173 -- # '[' 0 -eq 1 ']'
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@182 -- # [[ y == y ]]
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@183 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@184 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:05:49.379   13:42:32 unittest -- unit/unittest.sh@186 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:05:57.487  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:57.487  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
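Note: the -i capture above records only zero execution counts for every instrumented file, which is why the stubs-only nvme_stubs.gcno draws a "no functions found" warning. The interesting counts arrive after the tests run; a sketch of the usual three-step flow, where the post-test capture, merge, and genhtml steps are assumed since they fall outside this excerpt (UT_COVERAGE path hypothetical):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    UT_COVERAGE=$HOME/output/ut_coverage   # hypothetical output dir
    # 1. baseline: zero counts for every instrumented file
    lcov $LCOV_OPTS -q -c --no-external -i -d . -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"
    # 2. after running the unit tests (which write .gcda files), capture real counts
    lcov $LCOV_OPTS -q -c --no-external -d . -t UT -o "$UT_COVERAGE/ut_cov_test.info"
    # 3. merge so never-executed files still show up at 0%, then render HTML
    lcov $LCOV_OPTS -a "$UT_COVERAGE/ut_cov_base.info" -a "$UT_COVERAGE/ut_cov_test.info" \
         -o "$UT_COVERAGE/ut_cov_total.info"
    genhtml "$UT_COVERAGE/ut_cov_total.info" -o "$UT_COVERAGE"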
00:07:05.217    13:43:42 unittest -- unit/unittest.sh@190 -- # uname -m
00:07:05.217   13:43:42 unittest -- unit/unittest.sh@190 -- # '[' x86_64 = aarch64 ']'
00:07:05.217   13:43:42 unittest -- unit/unittest.sh@194 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:07:05.217   13:43:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:05.217   13:43:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.217   13:43:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:05.217  ************************************
00:07:05.217  START TEST unittest_pci_event
00:07:05.217  ************************************
00:07:05.217   13:43:42 unittest.unittest_pci_event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:07:05.217  
00:07:05.217  
00:07:05.217       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.217       http://cunit.sourceforge.net/
00:07:05.217  
00:07:05.217  
00:07:05.217  Suite: pci_event
00:07:05.217    Test: test_pci_parse_event ...[2024-12-11 13:43:42.997748] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000
00:07:05.217  [2024-12-11 13:43:42.998307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000
00:07:05.217  passed
00:07:05.217  
00:07:05.217  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.217                suites      1      1    n/a      0        0
00:07:05.217                 tests      1      1      1      0        0
00:07:05.217               asserts     15     15     15      0      n/a
00:07:05.217  
00:07:05.217  Elapsed time =    0.001 seconds
00:07:05.217  
00:07:05.217  real	0m0.050s
00:07:05.217  user	0m0.022s
00:07:05.217  sys	0m0.020s
00:07:05.217   13:43:43 unittest.unittest_pci_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.217   13:43:43 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x
00:07:05.217  ************************************
00:07:05.217  END TEST unittest_pci_event
00:07:05.217  ************************************
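Note: each unit-test binary is driven through a run_test-style wrapper that produces the START/END banners and the real/user/sys timing seen above. A minimal stand-in with the same observable shape; SPDK's actual wrapper also toggles xtrace and tags the test domain, which this sketch omits:

    run_test() {
        local name=$1
        shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"            # bash's `time` keyword emits the real/user/sys lines
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return $rc
    }
    # e.g.: run_test unittest_include /path/to/histogram_ut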
00:07:05.217   13:43:43 unittest -- unit/unittest.sh@195 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:07:05.217   13:43:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:05.217   13:43:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.217   13:43:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:05.217  ************************************
00:07:05.217  START TEST unittest_include
00:07:05.217  ************************************
00:07:05.217   13:43:43 unittest.unittest_include -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:07:05.217  
00:07:05.218  
00:07:05.218       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.218       http://cunit.sourceforge.net/
00:07:05.218  
00:07:05.218  
00:07:05.218  Suite: histogram
00:07:05.218    Test: histogram_test ...passed
00:07:05.218    Test: histogram_merge ...passed
00:07:05.218  
00:07:05.218  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.218                suites      1      1    n/a      0        0
00:07:05.218                 tests      2      2      2      0        0
00:07:05.218               asserts     50     50     50      0      n/a
00:07:05.218  
00:07:05.218  Elapsed time =    0.006 seconds
00:07:05.218  
00:07:05.218  real	0m0.038s
00:07:05.218  user	0m0.021s
00:07:05.218  sys	0m0.017s
00:07:05.218   13:43:43 unittest.unittest_include -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.218   13:43:43 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x
00:07:05.218  ************************************
00:07:05.218  END TEST unittest_include
00:07:05.218  ************************************
00:07:05.218   13:43:43 unittest -- unit/unittest.sh@196 -- # run_test unittest_bdev unittest_bdev
00:07:05.218   13:43:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:05.218   13:43:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.218   13:43:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:05.218  ************************************
00:07:05.218  START TEST unittest_bdev
00:07:05.218  ************************************
00:07:05.218   13:43:43 unittest.unittest_bdev -- common/autotest_common.sh@1129 -- # unittest_bdev
00:07:05.218   13:43:43 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:07:05.218  
00:07:05.218  
00:07:05.218       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.218       http://cunit.sourceforge.net/
00:07:05.218  
00:07:05.218  
00:07:05.218  Suite: bdev
00:07:05.218    Test: bytes_to_blocks_test ...passed
00:07:05.218    Test: num_blocks_test ...passed
00:07:05.218    Test: io_valid_test ...passed
00:07:05.218    Test: open_write_test ...[2024-12-11 13:43:43.230946] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8538:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:07:05.218  [2024-12-11 13:43:43.231231] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8538:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:07:05.218  [2024-12-11 13:43:43.231318] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8538:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:07:05.218  passed
00:07:05.218    Test: claim_test ...passed
00:07:05.218    Test: alias_add_del_test ...[2024-12-11 13:43:43.295886] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4957:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:07:05.218  [2024-12-11 13:43:43.295947] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4987:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:07:05.218  [2024-12-11 13:43:43.295983] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4957:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:07:05.218  passed
00:07:05.218    Test: get_device_stat_test ...passed
00:07:05.218    Test: bdev_io_types_test ...passed
00:07:05.218    Test: bdev_io_wait_test ...passed
00:07:05.218    Test: bdev_io_spans_split_test ...passed
00:07:05.218    Test: bdev_io_boundary_split_test ...passed
00:07:05.218    Test: bdev_io_max_size_and_segment_split_test ...[2024-12-11 13:43:43.425917] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3415:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:07:05.218  passed
00:07:05.218    Test: bdev_io_mix_split_test ...passed
00:07:05.218    Test: bdev_io_split_with_io_wait ...passed
00:07:05.218    Test: bdev_io_write_unit_split_test ...[2024-12-11 13:43:43.523146] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2956:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:07:05.218  [2024-12-11 13:43:43.523256] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2956:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:07:05.218  [2024-12-11 13:43:43.523279] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2956:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:07:05.218  [2024-12-11 13:43:43.523351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2956:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:07:05.218  passed
00:07:05.218    Test: bdev_io_alignment_with_boundary ...[2024-12-11 13:43:43.557303] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8185, expected 8192
00:07:05.218  [2024-12-11 13:43:43.557382] iobuf.c: 195:iobuf_node_free: *ERROR*: large iobuf pool count is 1015, expected 1024
00:07:05.218  passed
00:07:05.218    Test: bdev_io_alignment ...[2024-12-11 13:43:43.589323] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8184, expected 8192
00:07:05.218  passed
00:07:05.218    Test: bdev_histograms ...passed
00:07:05.218    Test: bdev_write_zeroes ...passed
00:07:05.218    Test: bdev_compare_and_write ...[2024-12-11 13:43:43.691747] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8190, expected 8192
00:07:05.218  passed
00:07:05.218    Test: bdev_compare ...passed
00:07:05.218    Test: bdev_compare_emulated ...[2024-12-11 13:43:43.789197] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8188, expected 8192
00:07:05.218  [2024-12-11 13:43:43.820106] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8187, expected 8192
00:07:05.218  passed
00:07:05.218    Test: bdev_zcopy_write ...passed
00:07:05.218    Test: bdev_zcopy_read ...passed
00:07:05.218    Test: bdev_open_while_hotremove ...passed
00:07:05.218    Test: bdev_close_while_hotremove ...passed
00:07:05.218    Test: bdev_open_ext_test ...passed
00:07:05.218    Test: bdev_open_ext_unregister ...[2024-12-11 13:43:43.905893] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8725:spdk_bdev_open_ext_v2: *ERROR*: Missing event callback function
00:07:05.218  [2024-12-11 13:43:43.906080] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8725:spdk_bdev_open_ext_v2: *ERROR*: Missing event callback function
00:07:05.218  passed
00:07:05.218    Test: bdev_set_io_timeout ...passed
00:07:05.218    Test: bdev_set_qd_sampling ...passed
00:07:05.218    Test: lba_range_overlap ...passed
00:07:05.218    Test: lock_lba_range_check_ranges ...passed
00:07:05.218    Test: lock_lba_range_with_io_outstanding ...passed
00:07:05.218    Test: lock_lba_range_overlapped ...passed
00:07:05.218    Test: bdev_quiesce ...[2024-12-11 13:43:44.072116] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10711:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:07:05.218  passed
00:07:05.218    Test: bdev_io_abort ...passed
00:07:05.218    Test: bdev_unmap ...passed
00:07:05.218    Test: bdev_write_zeroes_split_test ...passed
00:07:05.218    Test: bdev_set_options_test ...passed
00:07:05.218    Test: bdev_get_memory_domains ...passed
00:07:05.218    Test: bdev_io_ext ...[2024-12-11 13:43:44.193574] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 512:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:07:05.218  passed
00:07:05.218    Test: bdev_io_ext_no_opts ...passed
00:07:05.218    Test: bdev_io_ext_invalid_opts ...passed
00:07:05.218    Test: bdev_io_ext_split ...passed
00:07:05.218    Test: bdev_io_ext_bounce_buffer ...[2024-12-11 13:43:44.326592] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8188, expected 8192
00:07:05.218  passed
00:07:05.218    Test: bdev_register_uuid_alias ...[2024-12-11 13:43:44.357542] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4957:bdev_name_add: *ERROR*: Bdev name d0e85091-29fc-4a7b-8dbb-454ecd42a8d2 already exists
00:07:05.218  [2024-12-11 13:43:44.357621] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:d0e85091-29fc-4a7b-8dbb-454ecd42a8d2 alias for bdev bdev0
00:07:05.218  passed
00:07:05.218    Test: bdev_unregister_by_name ...[2024-12-11 13:43:44.382523] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8434:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:07:05.218  [2024-12-11 13:43:44.382588] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8442:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:07:05.218  passed
00:07:05.218    Test: for_each_bdev_test ...passed
00:07:05.218    Test: bdev_seek_test ...passed
00:07:05.218    Test: bdev_copy ...[2024-12-11 13:43:44.424070] iobuf.c: 195:iobuf_node_free: *ERROR*: large iobuf pool count is 1023, expected 1024
00:07:05.218  passed
00:07:05.218    Test: bdev_copy_split_test ...[2024-12-11 13:43:44.455813] iobuf.c: 190:iobuf_node_free: *ERROR*: small iobuf pool count is 8190, expected 8192
00:07:05.218  passed
00:07:05.218    Test: examine_locks ...passed
00:07:05.218    Test: claim_v2_rwo ...[2024-12-11 13:43:44.481204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8538:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481274] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9266:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:05.218  passed
00:07:05.218    Test: claim_v2_rom ...[2024-12-11 13:43:44.481293] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481313] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481331] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9103:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481364] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9261:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:07:05.218  [2024-12-11 13:43:44.481499] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8538:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481539] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481552] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9103:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:05.218  passed
00:07:05.218    Test: claim_v2_rwm ...[2024-12-11 13:43:44.481598] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9304:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:07:05.218  [2024-12-11 13:43:44.481620] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9299:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:05.218  [2024-12-11 13:43:44.481733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9334:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:07:05.218  [2024-12-11 13:43:44.481771] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8538:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:05.218  passed
00:07:05.218    Test: claim_v2_existing_writer ...[2024-12-11 13:43:44.481794] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481808] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:05.218  [2024-12-11 13:43:44.481824] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9103:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.481838] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9354:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.481870] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9334:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:07:05.219  passed
00:07:05.219    Test: claim_v2_existing_v1 ...[2024-12-11 13:43:44.481995] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9299:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:05.219  [2024-12-11 13:43:44.482032] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9299:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:05.219  [2024-12-11 13:43:44.482124] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.482147] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.482160] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:05.219  passed
00:07:05.219    Test: claim_v1_existing_v2 ...passed
00:07:05.219    Test: examine_claimed ...[2024-12-11 13:43:44.482264] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9103:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.482290] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9103:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.482316] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9103:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:05.219  [2024-12-11 13:43:44.488544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:07:05.219  passed
00:07:05.219    Test: examine_claimed_manual ...[2024-12-11 13:43:44.519815] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9431:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:07:05.219  passed
00:07:05.219    Test: get_numa_id ...passed
00:07:05.219    Test: get_device_stat_with_reset ...passed
00:07:05.219    Test: open_ext_v2_test ...passed
00:07:05.219    Test: bdev_io_init_dif_ctx_test ...[2024-12-11 13:43:44.577210] dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:05.219  passed
00:07:05.219  
00:07:05.219  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.219                suites      1      1    n/a      0        0
00:07:05.219                 tests     64     64     64      0        0
00:07:05.219               asserts   4718   4718   4718      0      n/a
00:07:05.219  
00:07:05.219  Elapsed time =    1.386 seconds
00:07:05.219   13:43:44 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:07:05.219  
00:07:05.219  
00:07:05.219       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.219       http://cunit.sourceforge.net/
00:07:05.219  
00:07:05.219  
00:07:05.219  Suite: nvme
00:07:05.219    Test: test_create_ctrlr ...passed
00:07:05.219    Test: test_reset_ctrlr ...[2024-12-11 13:43:44.632814] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:07:05.219    Test: test_failover_ctrlr ...passed
00:07:05.219    Test: test_race_between_failover_and_add_secondary_trid ...[2024-12-11 13:43:44.635674] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.635910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.636105] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_pending_reset ...[2024-12-11 13:43:44.638247] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.638524] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_attach_ctrlr ...passed
00:07:05.219    Test: test_aer_cb ...[2024-12-11 13:43:44.639752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:07:05.219  passed
00:07:05.219    Test: test_submit_nvme_cmd ...passed
00:07:05.219    Test: test_add_remove_trid ...passed
00:07:05.219    Test: test_abort ...[2024-12-11 13:43:44.643306] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7991:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:07:05.219  passed
00:07:05.219    Test: test_get_io_qpair ...passed
00:07:05.219    Test: test_bdev_unregister ...passed
00:07:05.219    Test: test_compare_ns ...passed
00:07:05.219    Test: test_init_ana_log_page ...passed
00:07:05.219    Test: test_get_memory_domains ...passed
00:07:05.219    Test: test_reconnect_qpair ...[2024-12-11 13:43:44.645885] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 17] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_create_bdev_ctrlr ...[2024-12-11 13:43:44.646747] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5784:bdev_nvme_check_multipath: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 18] cntlid 18 are duplicated.
00:07:05.219  passed
00:07:05.219    Test: test_add_multi_ns_to_bdev ...[2024-12-11 13:43:44.648361] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4929:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical.
00:07:05.219  passed
00:07:05.219    Test: test_add_multi_io_paths_to_nbdev_ch ...passed
00:07:05.219    Test: test_admin_path ...passed
00:07:05.219    Test: test_reset_bdev_ctrlr ...[2024-12-11 13:43:44.653376] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.653757] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.653886] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.654240] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.654515] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.654629] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.654971] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.655061] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.655273] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.655309] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.655424] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.655452] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_find_io_path ...passed
00:07:05.219    Test: test_retry_io_if_ana_state_is_updating ...passed
00:07:05.219    Test: test_retry_io_for_io_path_error ...passed
00:07:05.219    Test: test_retry_io_count ...passed
00:07:05.219    Test: test_concurrent_read_ana_log_page ...passed
00:07:05.219    Test: test_retry_io_for_ana_error ...passed
00:07:05.219    Test: test_check_io_error_resiliency_params ...[2024-12-11 13:43:44.658979] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6629:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1.
00:07:05.219  [2024-12-11 13:43:44.659145] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6633:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:07:05.219  [2024-12-11 13:43:44.659319] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6642:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:07:05.219  [2024-12-11 13:43:44.659512] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6645:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec.
00:07:05.219  [2024-12-11 13:43:44.659716] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6657:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:07:05.219  [2024-12-11 13:43:44.659865] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6657:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:07:05.219  [2024-12-11 13:43:44.660081] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6637:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec.
00:07:05.219  [2024-12-11 13:43:44.660232] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6652:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:07:05.219  [2024-12-11 13:43:44.660417] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6649:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:07:05.219  passed
00:07:05.219    Test: test_retry_io_if_ctrlr_is_resetting ...passed
00:07:05.219    Test: test_reconnect_ctrlr ...[2024-12-11 13:43:44.661135] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.661211] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.661393] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.661464] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  [2024-12-11 13:43:44.661522] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_retry_failover_ctrlr ...[2024-12-11 13:43:44.661831] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.219  passed
00:07:05.219    Test: test_fail_path ...[2024-12-11 13:43:44.662319] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:05.220  [2024-12-11 13:43:44.662450] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:05.220  [2024-12-11 13:43:44.662546] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:05.220  [2024-12-11 13:43:44.662612] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:05.220  [2024-12-11 13:43:44.662710] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:05.220  passed
00:07:05.220    Test: test_nvme_ns_cmp ...passed
00:07:05.220    Test: test_ana_transition ...passed
00:07:05.220    Test: test_set_preferred_path ...passed
00:07:05.220    Test: test_find_next_io_path ...passed
00:07:05.220    Test: test_find_io_path_min_qd ...passed
00:07:05.220    Test: test_disable_auto_failback ...[2024-12-11 13:43:44.664194] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 45] Resetting controller failed.
00:07:05.220  passed
00:07:05.220    Test: test_set_multipath_policy ...passed
00:07:05.220    Test: test_uuid_generation ...passed
00:07:05.220    Test: test_retry_io_to_same_path ...passed
00:07:05.220    Test: test_race_between_reset_and_disconnected ...passed
00:07:05.220    Test: test_ctrlr_op_rpc ...passed
00:07:05.220    Test: test_bdev_ctrlr_op_rpc ...passed
00:07:05.220    Test: test_disable_enable_ctrlr ...[2024-12-11 13:43:44.668542] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.220  passed
00:07:05.220    Test: test_delete_ctrlr_done ...passed
00:07:05.220    Test: test_ns_remove_during_reset ...passed
00:07:05.220    Test: test_io_path_is_current ...passed
00:07:05.220    Test: test_bdev_reset_abort_io ...passed
00:07:05.220    Test: test_race_between_clear_pending_resets_and_reset_ctrlr_complete ...[2024-12-11 13:43:44.668937] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:05.220  passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      1      1    n/a      0        0
00:07:05.220                 tests     51     51     51      0        0
00:07:05.220               asserts   4017   4017   4017      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.035 seconds
00:07:05.220   13:43:44 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
00:07:05.220  
00:07:05.220  
00:07:05.220       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.220       http://cunit.sourceforge.net/
00:07:05.220  
00:07:05.220  Test Options
00:07:05.220  blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2
00:07:05.220  
00:07:05.220  Suite: raid
00:07:05.220    Test: test_create_raid ...passed
00:07:05.220    Test: test_create_raid_superblock ...passed
00:07:05.220    Test: test_delete_raid ...passed
00:07:05.220    Test: test_create_raid_invalid_args ...[2024-12-11 13:43:44.738907] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1521:_raid_bdev_create: *ERROR*: Unsupported raid level '-1'
00:07:05.220  [2024-12-11 13:43:44.739828] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1515:_raid_bdev_create: *ERROR*: Invalid strip size 1231
00:07:05.220  [2024-12-11 13:43:44.741340] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1505:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1
00:07:05.220  [2024-12-11 13:43:44.741924] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:07:05.220  [2024-12-11 13:43:44.742161] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:07:05.220  [2024-12-11 13:43:44.744371] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:07:05.220  [2024-12-11 13:43:44.744718] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:07:05.220  passed
00:07:05.220    Test: test_delete_raid_invalid_args ...passed
00:07:05.220    Test: test_io_channel ...passed
00:07:05.220    Test: test_reset_io ...passed
00:07:05.220    Test: test_multi_raid ...passed
00:07:05.220    Test: test_io_type_supported ...passed
00:07:05.220    Test: test_raid_json_dump_info ...passed
00:07:05.220    Test: test_context_size ...passed
00:07:05.220    Test: test_raid_level_conversions ...passed
00:07:05.220    Test: test_raid_io_split ...passed
00:07:05.220    Test: test_raid_process ...passed
00:07:05.220    Test: test_raid_process_with_qos ...passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      1      1    n/a      0        0
00:07:05.220                 tests     15     15     15      0        0
00:07:05.220               asserts   6602   6602   6602      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.047 seconds
00:07:05.220   13:43:44 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
00:07:05.220  
00:07:05.220  
00:07:05.220       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.220       http://cunit.sourceforge.net/
00:07:05.220  
00:07:05.220  
00:07:05.220  Suite: raid_sb
00:07:05.220    Test: test_raid_bdev_write_superblock ...passed
00:07:05.220    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:05.220    Test: test_raid_bdev_parse_superblock ...[2024-12-11 13:43:44.824073] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:05.220  passed
00:07:05.220  Suite: raid_sb_md
00:07:05.220    Test: test_raid_bdev_write_superblock ...passed
00:07:05.220    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:05.220    Test: test_raid_bdev_parse_superblock ...[2024-12-11 13:43:44.824549] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:05.220  passed
00:07:05.220  
00:07:05.220  Suite: raid_sb_md_interleaved
00:07:05.220    Test: test_raid_bdev_write_superblock ...passed
00:07:05.220    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:05.220    Test: test_raid_bdev_parse_superblock ...[2024-12-11 13:43:44.825059] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:05.220  passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      3      3    n/a      0        0
00:07:05.220                 tests      9      9      9      0        0
00:07:05.220               asserts    139    139    139      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.002 seconds
00:07:05.220   13:43:44 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut
00:07:05.220  
00:07:05.220  
00:07:05.220       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.220       http://cunit.sourceforge.net/
00:07:05.220  
00:07:05.220  
00:07:05.220  Suite: concat
00:07:05.220    Test: test_concat_start ...passed
00:07:05.220    Test: test_concat_rw ...passed
00:07:05.220    Test: test_concat_null_payload ...passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      1      1    n/a      0        0
00:07:05.220                 tests      3      3      3      0        0
00:07:05.220               asserts   8460   8460   8460      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.008 seconds
00:07:05.220   13:43:44 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut
00:07:05.220  
00:07:05.220  
00:07:05.220       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.220       http://cunit.sourceforge.net/
00:07:05.220  
00:07:05.220  
00:07:05.220  Suite: raid0
00:07:05.220    Test: test_write_io ...passed
00:07:05.220    Test: test_read_io ...passed
00:07:05.220    Test: test_unmap_io ...passed
00:07:05.220    Test: test_io_failure ...passed
00:07:05.220  Suite: raid0_dif
00:07:05.220    Test: test_write_io ...passed
00:07:05.220    Test: test_read_io ...passed
00:07:05.220    Test: test_unmap_io ...passed
00:07:05.220    Test: test_io_failure ...passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      2      2    n/a      0        0
00:07:05.220                 tests      8      8      8      0        0
00:07:05.220               asserts 368291 368291 368291      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.184 seconds
00:07:05.220   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut
00:07:05.220  
00:07:05.220  
00:07:05.220       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.220       http://cunit.sourceforge.net/
00:07:05.220  
00:07:05.220  
00:07:05.220  Suite: raid1
00:07:05.220    Test: test_raid1_start ...passed
00:07:05.220    Test: test_raid1_read_balancing ...passed
00:07:05.220    Test: test_raid1_write_error ...passed
00:07:05.220    Test: test_raid1_read_error ...passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      1      1    n/a      0        0
00:07:05.220                 tests      4      4      4      0        0
00:07:05.220               asserts   4374   4374   4374      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.009 seconds
00:07:05.220   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut
00:07:05.220  
00:07:05.220  
00:07:05.220       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.220       http://cunit.sourceforge.net/
00:07:05.220  
00:07:05.220  
00:07:05.220  Suite: zone
00:07:05.220    Test: test_zone_get_operation ...passed
00:07:05.220    Test: test_bdev_zone_get_info ...passed
00:07:05.220    Test: test_bdev_zone_management ...passed
00:07:05.220    Test: test_bdev_zone_append ...passed
00:07:05.220    Test: test_bdev_zone_append_with_md ...passed
00:07:05.220    Test: test_bdev_zone_appendv ...passed
00:07:05.220    Test: test_bdev_zone_appendv_with_md ...passed
00:07:05.220    Test: test_bdev_io_get_append_location ...passed
00:07:05.220  
00:07:05.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.220                suites      1      1    n/a      0        0
00:07:05.220                 tests      8      8      8      0        0
00:07:05.220               asserts     94     94     94      0      n/a
00:07:05.220  
00:07:05.220  Elapsed time =    0.001 seconds
00:07:05.220   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut
00:07:05.220  
00:07:05.221  
00:07:05.221       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.221       http://cunit.sourceforge.net/
00:07:05.221  
00:07:05.221  
00:07:05.221  Suite: gpt_parse
00:07:05.221    Test: test_parse_mbr_and_primary ...[2024-12-11 13:43:45.258726] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:05.221  [2024-12-11 13:43:45.258989] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:05.221  [2024-12-11 13:43:45.259071] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:07:05.221  [2024-12-11 13:43:45.259102] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:07:05.221  [2024-12-11 13:43:45.259150] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:07:05.221  [2024-12-11 13:43:45.259188] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:07:05.221  passed
00:07:05.221    Test: test_parse_secondary ...[2024-12-11 13:43:45.259849] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:07:05.221  [2024-12-11 13:43:45.259875] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:07:05.221  [2024-12-11 13:43:45.259919] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:07:05.221  [2024-12-11 13:43:45.259959] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:07:05.221  passed
00:07:05.221    Test: test_check_mbr ...[2024-12-11 13:43:45.260541] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:05.221  [2024-12-11 13:43:45.260578] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:05.221  passed
00:07:05.221    Test: test_read_header ...[2024-12-11 13:43:45.260726] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600
00:07:05.221  [2024-12-11 13:43:45.260755] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438
00:07:05.221  [2024-12-11 13:43:45.260817] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match
00:07:05.221  [2024-12-11 13:43:45.260857] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1)
00:07:05.221  [2024-12-11 13:43:45.260894] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0)
00:07:05.221  [2024-12-11 13:43:45.260922] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error
00:07:05.221  passed
00:07:05.221    Test: test_read_partitions ...[2024-12-11 13:43:45.261031] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128
00:07:05.221  [2024-12-11 13:43:45.261057] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80)
00:07:05.221  [2024-12-11 13:43:45.261087] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough
00:07:05.221  [2024-12-11 13:43:45.261125] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf
00:07:05.221  [2024-12-11 13:43:45.261426] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match
00:07:05.221  passed
00:07:05.221  
00:07:05.221  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.221                suites      1      1    n/a      0        0
00:07:05.221                 tests      5      5      5      0        0
00:07:05.221               asserts     33     33     33      0      n/a
00:07:05.221  
00:07:05.221  Elapsed time =    0.003 seconds
00:07:05.221   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut
00:07:05.221  
00:07:05.221  
00:07:05.221       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.221       http://cunit.sourceforge.net/
00:07:05.221  
00:07:05.221  
00:07:05.221  Suite: bdev_part
00:07:05.221    Test: part_test ...[2024-12-11 13:43:45.303078] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 7b9cd665-52a6-532b-96b7-02ee5c4225ca already exists
00:07:05.221  [2024-12-11 13:43:45.303437] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:7b9cd665-52a6-532b-96b7-02ee5c4225ca alias for bdev test1
00:07:05.221  passed
00:07:05.221    Test: part_free_test ...passed
00:07:05.221    Test: part_get_io_channel_test ...passed
00:07:05.221    Test: part_construct_ext ...passed
00:07:05.221  
00:07:05.221  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.221                suites      1      1    n/a      0        0
00:07:05.221                 tests      4      4      4      0        0
00:07:05.221               asserts     48     48     48      0      n/a
00:07:05.221  
00:07:05.221  Elapsed time =    0.060 seconds
00:07:05.221   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut
00:07:05.221  
00:07:05.221  
00:07:05.221       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.221       http://cunit.sourceforge.net/
00:07:05.221  
00:07:05.221  
00:07:05.221  Suite: scsi_nvme_suite
00:07:05.221    Test: scsi_nvme_translate_test ...passed
00:07:05.221  
00:07:05.221  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.221                suites      1      1    n/a      0        0
00:07:05.221                 tests      1      1      1      0        0
00:07:05.221               asserts    104    104    104      0      n/a
00:07:05.221  
00:07:05.221  Elapsed time =    0.000 seconds
00:07:05.221   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut
00:07:05.221  
00:07:05.221  
00:07:05.221       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.221       http://cunit.sourceforge.net/
00:07:05.221  
00:07:05.221  
00:07:05.221  Suite: lvol
00:07:05.221    Test: ut_lvs_init ...[2024-12-11 13:43:45.453133] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev
00:07:05.221  [2024-12-11 13:43:45.453712] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 275:vbdev_lvs_create_ext: *ERROR*: Cannot create blobstore device
00:07:05.221  passed
00:07:05.221    Test: ut_lvol_init ...passed
00:07:05.221    Test: ut_lvol_snapshot ...passed
00:07:05.221    Test: ut_lvol_clone ...passed
00:07:05.221    Test: ut_lvs_destroy ...passed
00:07:05.221    Test: ut_lvs_unload ...passed
00:07:05.221    Test: ut_lvol_resize ...[2024-12-11 13:43:45.455786] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1422:vbdev_lvol_resize: *ERROR*: lvol does not exist
00:07:05.221  passed
00:07:05.221    Test: ut_lvol_set_read_only ...passed
00:07:05.221    Test: ut_lvol_hotremove ...passed
00:07:05.221    Test: ut_vbdev_lvol_get_io_channel ...passed
00:07:05.221    Test: ut_vbdev_lvol_io_type_supported ...passed
00:07:05.221    Test: ut_lvol_read_write ...passed
00:07:05.221    Test: ut_vbdev_lvol_submit_request ...passed
00:07:05.221    Test: ut_lvol_examine_config ...passed
00:07:05.221    Test: ut_lvol_examine_disk ...[2024-12-11 13:43:45.456723] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1564:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID
00:07:05.221  passed
00:07:05.221    Test: ut_lvol_rename ...[2024-12-11 13:43:45.458065] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name'
00:07:05.221  [2024-12-11 13:43:45.458141] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1372:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed
00:07:05.221  passed
00:07:05.221    Test: ut_bdev_finish ...passed
00:07:05.221    Test: ut_lvs_rename ...passed
00:07:05.221    Test: ut_lvol_seek ...passed
00:07:05.221    Test: ut_esnap_dev_create ...[2024-12-11 13:43:45.459075] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1907:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID
00:07:05.221  [2024-12-11 13:43:45.459143] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1913:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36)
00:07:05.221  passed
00:07:05.221    Test: ut_lvol_esnap_clone_bad_args ...[2024-12-11 13:43:45.459183] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1918:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID
00:07:05.221  [2024-12-11 13:43:45.459332] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1308:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified
00:07:05.221  [2024-12-11 13:43:45.459382] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1315:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:07:05.221  passed
00:07:05.221    Test: ut_lvol_shallow_copy ...[2024-12-11 13:43:45.459748] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2005:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:07:05.221  [2024-12-11 13:43:45.459799] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2010:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL
00:07:05.221  passed
00:07:05.221    Test: ut_lvol_set_external_parent ...passed
00:07:05.221  
00:07:05.221  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.221                suites      1      1    n/a      0        0
00:07:05.221                 tests     23     23     23      0        0
00:07:05.221               asserts    770    770    770      0      n/a
00:07:05.221  
00:07:05.221  Elapsed time =    0.007 seconds
00:07:05.221  [2024-12-11 13:43:45.459933] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2065:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:07:05.221   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut
00:07:05.221  
00:07:05.221  
00:07:05.221       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.221       http://cunit.sourceforge.net/
00:07:05.221  
00:07:05.221  
00:07:05.221  Suite: zone_block
00:07:05.221    Test: test_zone_block_create ...passed
00:07:05.221    Test: test_zone_block_create_invalid ...[2024-12-11 13:43:45.530870] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed
00:07:05.221  [2024-12-11 13:43:45.531151] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:07:05.221  [2024-12-11 13:43:45.531403] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev
00:07:05.221  [2024-12-11 13:43:45.531492] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:07:05.221  [2024-12-11 13:43:45.531820] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0
00:07:05.221  [2024-12-11 13:43:45.531880] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:07:05.221  [2024-12-11 13:43:45.532010] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0
00:07:05.222  [2024-12-11 13:43:45.532077] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:07:05.222  passed
00:07:05.222    Test: test_get_zone_info ...[2024-12-11 13:43:45.533106] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.533215] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.533308] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_supported_io_types ...passed
00:07:05.222    Test: test_reset_zone ...[2024-12-11 13:43:45.534793] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.534851] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_open_zone ...[2024-12-11 13:43:45.535493] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.536289] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.536399] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_zone_write ...[2024-12-11 13:43:45.537236] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:07:05.222  [2024-12-11 13:43:45.537294] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.537366] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:07:05.222  [2024-12-11 13:43:45.537391] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.544991] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405)
00:07:05.222  [2024-12-11 13:43:45.545054] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.545162] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405)
00:07:05.222  [2024-12-11 13:43:45.545213] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.552397] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:07:05.222  [2024-12-11 13:43:45.552472] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_zone_read ...[2024-12-11 13:43:45.553101] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10)
00:07:05.222  [2024-12-11 13:43:45.553160] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.553256] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000)
00:07:05.222  [2024-12-11 13:43:45.553296] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.554076] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10)
00:07:05.222  [2024-12-11 13:43:45.554125] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_close_zone ...[2024-12-11 13:43:45.554730] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.554820] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.555149] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.555207] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_finish_zone ...[2024-12-11 13:43:45.555976] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.556065] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222    Test: test_append_zone ...[2024-12-11 13:43:45.556549] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:07:05.222  [2024-12-11 13:43:45.556593] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.556671] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:07:05.222  [2024-12-11 13:43:45.556694] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  [2024-12-11 13:43:45.570881] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:07:05.222  [2024-12-11 13:43:45.570972] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:05.222  passed
00:07:05.222  
00:07:05.222  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.222                suites      1      1    n/a      0        0
00:07:05.222                 tests     11     11     11      0        0
00:07:05.222               asserts   3437   3437   3437      0      n/a
00:07:05.222  
00:07:05.222  Elapsed time =    0.042 seconds
00:07:05.222   13:43:45 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut
00:07:05.222  
00:07:05.222  
00:07:05.222       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.222       http://cunit.sourceforge.net/
00:07:05.222  
00:07:05.222  
00:07:05.222  Suite: bdev
00:07:05.222    Test: basic ...[2024-12-11 13:43:45.689051] thread.c:2418:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x61dc36a27821): Operation not permitted (rc=-1)
00:07:05.222  [2024-12-11 13:43:45.689540] thread.c:2418:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x61dc36a277e0): Operation not permitted (rc=-1)
00:07:05.222  [2024-12-11 13:43:45.689670] thread.c:2418:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x61dc36a27821): Operation not permitted (rc=-1)
00:07:05.222  passed
00:07:05.222    Test: unregister_and_close ...passed
00:07:05.222    Test: unregister_and_close_different_threads ...passed
00:07:05.222    Test: basic_qos ...passed
00:07:05.222    Test: put_channel_during_reset ...passed
00:07:05.222    Test: aborted_reset ...passed
00:07:05.222    Test: aborted_reset_no_outstanding_io ...passed
00:07:05.222    Test: io_during_reset ...passed
00:07:05.222    Test: reset_completions ...passed
00:07:05.222    Test: io_during_qos_queue ...passed
00:07:05.222    Test: io_during_qos_reset ...passed
00:07:05.222    Test: enomem ...passed
00:07:05.222    Test: enomem_multi_bdev ...passed
00:07:05.222    Test: enomem_multi_bdev_unregister ...passed
00:07:05.222    Test: enomem_multi_io_target ...passed
00:07:05.222    Test: enomem_retry_during_abort ...passed
00:07:05.222    Test: qos_dynamic_enable ...passed
00:07:05.222    Test: bdev_histograms_mt ...passed
00:07:05.222    Test: bdev_set_io_timeout_mt ...[2024-12-11 13:43:46.396313] thread.c: 494:spdk_thread_lib_fini: *ERROR*: io_device 0x513000000c80 not unregistered
00:07:05.222  passed
00:07:05.222    Test: lock_lba_range_then_submit_io ...[2024-12-11 13:43:46.404164] thread.c:2222:spdk_io_device_register: *ERROR*: io_device 0x61dc36a277a0 already registered (old:0x513000000c80 new:0x513000000740)
00:07:05.222  passed
00:07:05.222    Test: unregister_during_reset ...passed
00:07:05.222    Test: event_notify_and_close ...passed
00:07:05.222    Test: unregister_and_qos_poller ...passed
00:07:05.222    Test: reset_start_complete_race ...passed
00:07:05.222  Suite: bdev_wrong_thread
00:07:05.222    Test: spdk_bdev_register_wt ...[2024-12-11 13:43:46.592017] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9060:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x519000000580 (0x519000000580)
00:07:05.222  passed
00:07:05.222    Test: spdk_bdev_examine_wt ...[2024-12-11 13:43:46.592298] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 841:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x519000000580 (0x519000000580)
00:07:05.222  passed
00:07:05.222  
00:07:05.222  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:05.222                suites      2      2    n/a      0        0
00:07:05.222                 tests     26     26     26      0        0
00:07:05.222               asserts    679    679    679      0      n/a
00:07:05.222  
00:07:05.222  Elapsed time =    0.918 seconds
00:07:05.222  
00:07:05.222  real	0m3.454s
00:07:05.222  user	0m1.632s
00:07:05.223  sys	0m1.819s
00:07:05.223   13:43:46 unittest.unittest_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.223   13:43:46 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x
00:07:05.223  ************************************
00:07:05.223  END TEST unittest_bdev
00:07:05.223  ************************************
00:07:05.223   13:43:46 unittest -- unit/unittest.sh@197 -- # [[ n == y ]]
00:07:05.223   13:43:46 unittest -- unit/unittest.sh@202 -- # [[ n == y ]]
00:07:05.223   13:43:46 unittest -- unit/unittest.sh@207 -- # [[ n == y ]]
00:07:05.223   13:43:46 unittest -- unit/unittest.sh@211 -- # [[ n == y ]]
00:07:05.223   13:43:46 unittest -- unit/unittest.sh@215 -- # run_test unittest_blob_blobfs unittest_blob
00:07:05.223   13:43:46 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:05.223   13:43:46 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.223   13:43:46 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:05.223  ************************************
00:07:05.223  START TEST unittest_blob_blobfs
00:07:05.223  ************************************
00:07:05.223   13:43:46 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1129 -- # unittest_blob
00:07:05.223   13:43:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:07:05.223   13:43:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:07:05.223  
00:07:05.223  
00:07:05.223       CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.223       http://cunit.sourceforge.net/
00:07:05.223  
00:07:05.223  
00:07:05.223  Suite: blob_nocopy_noextent
00:07:05.223    Test: blob_init ...[2024-12-11 13:43:46.699067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5527:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:05.223  passed
00:07:05.223    Test: blob_thin_provision ...passed
00:07:05.223    Test: blob_read_only ...passed
00:07:05.223    Test: bs_load ...[2024-12-11 13:43:46.816358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:05.223  passed
00:07:05.223    Test: bs_load_custom_cluster_size ...passed
00:07:05.223    Test: bs_load_after_failed_grow ...passed
00:07:05.223    Test: bs_load_error ...passed
00:07:05.223    Test: bs_cluster_sz ...[2024-12-11 13:43:46.857125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:05.223  [2024-12-11 13:43:46.857543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5661:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:05.223  [2024-12-11 13:43:46.857613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3839:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:07:05.223  passed
00:07:05.223    Test: bs_resize_md ...passed
00:07:05.223    Test: bs_destroy ...passed
00:07:05.223    Test: bs_type ...passed
00:07:05.223    Test: bs_super_block ...passed
00:07:05.223    Test: bs_test_recover_cluster_count ...passed
00:07:05.223    Test: bs_grow_live ...passed
00:07:05.223    Test: bs_grow_live_no_space ...passed
00:07:05.223    Test: bs_test_grow ...passed
00:07:05.223    Test: blob_serialize_test ...passed
00:07:05.223    Test: super_block_crc ...passed
00:07:05.223    Test: blob_thin_prov_write_count_io ...passed
00:07:05.223    Test: blob_thin_prov_unmap_cluster ...passed
00:07:05.223    Test: bs_load_iter_test ...passed
00:07:05.223    Test: blob_relations ...[2024-12-11 13:43:47.094981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:05.223  [2024-12-11 13:43:47.095067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.096096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:05.223  [2024-12-11 13:43:47.096148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  passed
00:07:05.223    Test: blob_relations2 ...[2024-12-11 13:43:47.111052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:05.223  [2024-12-11 13:43:47.111134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.111179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:05.223  [2024-12-11 13:43:47.111193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.112863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:05.223  [2024-12-11 13:43:47.112920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.113417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:05.223  [2024-12-11 13:43:47.113460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  passed
00:07:05.223    Test: blob_relations3 ...passed
00:07:05.223    Test: blobstore_clean_power_failure ...passed
00:07:05.223    Test: blob_delete_snapshot_power_failure ...[2024-12-11 13:43:47.269729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:05.223  [2024-12-11 13:43:47.282334] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:05.223  [2024-12-11 13:43:47.282415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:05.223  [2024-12-11 13:43:47.282442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.294961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:05.223  [2024-12-11 13:43:47.295037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:05.223  [2024-12-11 13:43:47.295077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:05.223  [2024-12-11 13:43:47.295103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.307595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:05.223  [2024-12-11 13:43:47.307742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.320371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:05.223  [2024-12-11 13:43:47.320505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  [2024-12-11 13:43:47.333305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:05.223  [2024-12-11 13:43:47.333428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:05.223  passed
00:07:05.223    Test: blob_create_snapshot_power_failure ...[2024-12-11 13:43:47.370764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:05.223  [2024-12-11 13:43:47.395075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:05.223  [2024-12-11 13:43:47.407398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:05.223  passed
00:07:05.223    Test: blob_io_unit ...passed
00:07:05.223    Test: blob_io_unit_compatibility ...passed
00:07:05.223    Test: blob_ext_md_pages ...passed
00:07:05.223    Test: blob_esnap_io_4096_4096 ...passed
00:07:05.223    Test: blob_esnap_io_512_512 ...passed
00:07:05.223    Test: blob_esnap_io_4096_512 ...passed
00:07:05.223    Test: blob_esnap_io_512_4096 ...passed
00:07:05.223    Test: blob_esnap_clone_resize ...passed
00:07:05.223  Suite: blob_bs_nocopy_noextent
00:07:05.223    Test: blob_open ...passed
00:07:05.223    Test: blob_create ...[2024-12-11 13:43:47.688410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:05.223  passed
00:07:05.223    Test: blob_create_loop ...passed
00:07:05.223    Test: blob_create_fail ...[2024-12-11 13:43:47.785726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:05.223  passed
00:07:05.223    Test: blob_create_internal ...passed
00:07:05.223    Test: blob_create_zero_extent ...passed
00:07:05.223    Test: blob_snapshot ...passed
00:07:05.223    Test: blob_clone ...passed
00:07:05.223    Test: blob_inflate ...[2024-12-11 13:43:47.970493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:05.223  passed
00:07:05.484    Test: blob_delete ...passed
00:07:05.484    Test: blob_resize_test ...[2024-12-11 13:43:48.036037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:05.484  passed
00:07:05.484    Test: blob_resize_thin_test ...passed
00:07:05.484    Test: channel_ops ...passed
00:07:05.484    Test: blob_super ...passed
00:07:05.484    Test: blob_rw_verify_iov ...passed
00:07:05.484    Test: blob_unmap ...passed
00:07:05.484    Test: blob_iter ...passed
00:07:05.743    Test: blob_parse_md ...passed
00:07:05.743    Test: bs_load_pending_removal ...passed
00:07:05.743    Test: bs_unload ...[2024-12-11 13:43:48.334290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:05.743  passed
00:07:05.743    Test: bs_usable_clusters ...passed
00:07:05.743    Test: blob_crc ...[2024-12-11 13:43:48.399549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:05.743  [2024-12-11 13:43:48.399670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:05.743  passed
00:07:05.743    Test: blob_flags ...passed
00:07:05.743    Test: bs_version ...passed
00:07:05.743    Test: blob_set_xattrs_test ...[2024-12-11 13:43:48.500433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:05.743  [2024-12-11 13:43:48.500527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:05.743  passed
00:07:06.003    Test: blob_thin_prov_alloc ...passed
00:07:06.003    Test: blob_insert_cluster_msg_test ...passed
00:07:06.003    Test: blob_thin_prov_rw ...passed
00:07:06.003    Test: blob_thin_prov_rle ...passed
00:07:06.262    Test: blob_thin_prov_rw_iov ...passed
00:07:06.262    Test: blob_snapshot_rw ...passed
00:07:06.262    Test: blob_snapshot_rw_iov ...passed
00:07:06.520    Test: blob_inflate_rw ...passed
00:07:06.520    Test: blob_snapshot_freeze_io ...passed
00:07:06.779    Test: blob_operation_split_rw ...passed
00:07:06.779    Test: blob_operation_split_rw_iov ...passed
00:07:06.779    Test: blob_simultaneous_operations ...[2024-12-11 13:43:49.510065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:06.779  [2024-12-11 13:43:49.510158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:06.779  [2024-12-11 13:43:49.511727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:06.779  [2024-12-11 13:43:49.511777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:06.779  [2024-12-11 13:43:49.527271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:06.779  [2024-12-11 13:43:49.527350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:06.779  [2024-12-11 13:43:49.527481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:06.779  [2024-12-11 13:43:49.527500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:06.779  passed
00:07:07.037    Test: blob_persist_test ...passed
00:07:07.037    Test: blob_decouple_snapshot ...passed
00:07:07.037    Test: blob_seek_io_unit ...passed
00:07:07.037    Test: blob_nested_freezes ...passed
00:07:07.037    Test: blob_clone_resize ...passed
00:07:07.296    Test: blob_shallow_copy ...[2024-12-11 13:43:49.827571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:07.296  [2024-12-11 13:43:49.827900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7385:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:07.296  [2024-12-11 13:43:49.828139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7393:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:07.296  passed
00:07:07.296  Suite: blob_blob_nocopy_noextent
00:07:07.296    Test: blob_write ...passed
00:07:07.296    Test: blob_read ...passed
00:07:07.296    Test: blob_rw_verify ...passed
00:07:07.296    Test: blob_rw_verify_iov_nomem ...passed
00:07:07.296    Test: blob_rw_iov_read_only ...passed
00:07:07.296    Test: blob_xattr ...passed
00:07:07.555    Test: blob_dirty_shutdown ...passed
00:07:07.555    Test: blob_is_degraded ...passed
00:07:07.555  Suite: blob_esnap_bs_nocopy_noextent
00:07:07.555    Test: blob_esnap_create ...passed
00:07:07.555    Test: blob_esnap_thread_add_remove ...passed
00:07:07.555    Test: blob_esnap_clone_snapshot ...passed
00:07:07.555    Test: blob_esnap_clone_inflate ...passed
00:07:07.555    Test: blob_esnap_clone_decouple ...passed
00:07:07.555    Test: blob_esnap_clone_reload ...passed
00:07:07.813    Test: blob_esnap_hotplug ...passed
00:07:07.813    Test: blob_set_parent ...[2024-12-11 13:43:50.380848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:07.813  [2024-12-11 13:43:50.380967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:07.813  [2024-12-11 13:43:50.381125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:07.813  [2024-12-11 13:43:50.381154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:07.813  [2024-12-11 13:43:50.381802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:07.813  passed
00:07:07.813    Test: blob_set_external_parent ...[2024-12-11 13:43:50.415274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:07.813  [2024-12-11 13:43:50.415357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7839:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:07.813  [2024-12-11 13:43:50.415388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:07.813  [2024-12-11 13:43:50.415914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:07.813  passed
00:07:07.813  Suite: blob_nocopy_extent
00:07:07.813    Test: blob_init ...[2024-12-11 13:43:50.427474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5527:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:07.813  passed
00:07:07.813    Test: blob_thin_provision ...passed
00:07:07.813    Test: blob_read_only ...passed
00:07:07.813    Test: bs_load ...[2024-12-11 13:43:50.473958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:07.813  passed
00:07:07.813    Test: bs_load_custom_cluster_size ...passed
00:07:07.813    Test: bs_load_after_failed_grow ...passed
00:07:07.813    Test: bs_load_error ...passed
00:07:07.813    Test: bs_cluster_sz ...[2024-12-11 13:43:50.511976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:07.813  [2024-12-11 13:43:50.512279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5661:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:07.813  [2024-12-11 13:43:50.512329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3839:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:07:07.813  passed
00:07:07.813    Test: bs_resize_md ...passed
00:07:07.813    Test: bs_destroy ...passed
00:07:07.813    Test: bs_type ...passed
00:07:07.813    Test: bs_super_block ...passed
00:07:07.813    Test: bs_test_recover_cluster_count ...passed
00:07:07.813    Test: bs_grow_live ...passed
00:07:07.813    Test: bs_grow_live_no_space ...passed
00:07:08.071    Test: bs_test_grow ...passed
00:07:08.071    Test: blob_serialize_test ...passed
00:07:08.071    Test: super_block_crc ...passed
00:07:08.071    Test: blob_thin_prov_write_count_io ...passed
00:07:08.071    Test: blob_thin_prov_unmap_cluster ...passed
00:07:08.071    Test: bs_load_iter_test ...passed
00:07:08.071    Test: blob_relations ...[2024-12-11 13:43:50.733358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:08.071  [2024-12-11 13:43:50.733447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.071  [2024-12-11 13:43:50.734515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:08.071  [2024-12-11 13:43:50.734567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.071  passed
00:07:08.071    Test: blob_relations2 ...[2024-12-11 13:43:50.750065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:08.071  [2024-12-11 13:43:50.750146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.071  [2024-12-11 13:43:50.750175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:08.071  [2024-12-11 13:43:50.750189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.071  [2024-12-11 13:43:50.751790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:08.071  [2024-12-11 13:43:50.751847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.071  [2024-12-11 13:43:50.752306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:08.071  [2024-12-11 13:43:50.752348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.071  passed
00:07:08.071    Test: blob_relations3 ...passed
00:07:08.330    Test: blobstore_clean_power_failure ...passed
00:07:08.330    Test: blob_delete_snapshot_power_failure ...[2024-12-11 13:43:50.909620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:08.330  [2024-12-11 13:43:50.922163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:08.330  [2024-12-11 13:43:50.934573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:08.330  [2024-12-11 13:43:50.934673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:08.330  [2024-12-11 13:43:50.934703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.330  [2024-12-11 13:43:50.947272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:08.330  [2024-12-11 13:43:50.947346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:08.330  [2024-12-11 13:43:50.947385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:08.330  [2024-12-11 13:43:50.947412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.330  [2024-12-11 13:43:50.960089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:08.330  [2024-12-11 13:43:50.960171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:08.330  [2024-12-11 13:43:50.960219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:08.330  [2024-12-11 13:43:50.960251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.330  [2024-12-11 13:43:50.973014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:08.330  [2024-12-11 13:43:50.973121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.330  [2024-12-11 13:43:50.985898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:08.330  [2024-12-11 13:43:50.986027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.330  [2024-12-11 13:43:50.998862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:08.330  [2024-12-11 13:43:50.998950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:08.330  passed
00:07:08.330    Test: blob_create_snapshot_power_failure ...[2024-12-11 13:43:51.036788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:08.330  [2024-12-11 13:43:51.049032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:08.330  [2024-12-11 13:43:51.073185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:08.330  [2024-12-11 13:43:51.085711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:08.589  passed
00:07:08.589    Test: blob_io_unit ...passed
00:07:08.589    Test: blob_io_unit_compatibility ...passed
00:07:08.589    Test: blob_ext_md_pages ...passed
00:07:08.589    Test: blob_esnap_io_4096_4096 ...passed
00:07:08.589    Test: blob_esnap_io_512_512 ...passed
00:07:08.589    Test: blob_esnap_io_4096_512 ...passed
00:07:08.589    Test: blob_esnap_io_512_4096 ...passed
00:07:08.590    Test: blob_esnap_clone_resize ...passed
00:07:08.590  Suite: blob_bs_nocopy_extent
00:07:08.590    Test: blob_open ...passed
00:07:08.590    Test: blob_create ...[2024-12-11 13:43:51.361999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:08.848  passed
00:07:08.848    Test: blob_create_loop ...passed
00:07:08.848    Test: blob_create_fail ...[2024-12-11 13:43:51.468200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:08.848  passed
00:07:08.848    Test: blob_create_internal ...passed
00:07:08.848    Test: blob_create_zero_extent ...passed
00:07:08.848    Test: blob_snapshot ...passed
00:07:08.848    Test: blob_clone ...passed
00:07:09.136    Test: blob_inflate ...[2024-12-11 13:43:51.653008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:09.136  passed
00:07:09.136    Test: blob_delete ...passed
00:07:09.136    Test: blob_resize_test ...[2024-12-11 13:43:51.717826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:09.136  passed
00:07:09.136    Test: blob_resize_thin_test ...passed
00:07:09.136    Test: channel_ops ...passed
00:07:09.136    Test: blob_super ...passed
00:07:09.136    Test: blob_rw_verify_iov ...passed
00:07:09.405    Test: blob_unmap ...passed
00:07:09.405    Test: blob_iter ...passed
00:07:09.405    Test: blob_parse_md ...passed
00:07:09.405    Test: bs_load_pending_removal ...passed
00:07:09.405    Test: bs_unload ...[2024-12-11 13:43:52.022842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:09.405  passed
00:07:09.405    Test: bs_usable_clusters ...passed
00:07:09.405    Test: blob_crc ...[2024-12-11 13:43:52.089948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:09.405  [2024-12-11 13:43:52.090086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:09.405  passed
00:07:09.405    Test: blob_flags ...passed
00:07:09.405    Test: bs_version ...passed
00:07:09.665    Test: blob_set_xattrs_test ...[2024-12-11 13:43:52.190995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:09.665  [2024-12-11 13:43:52.191088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:09.665  passed
00:07:09.665    Test: blob_thin_prov_alloc ...passed
00:07:09.665    Test: blob_insert_cluster_msg_test ...passed
00:07:09.665    Test: blob_thin_prov_rw ...passed
00:07:09.665    Test: blob_thin_prov_rle ...passed
00:07:09.923    Test: blob_thin_prov_rw_iov ...passed
00:07:09.923    Test: blob_snapshot_rw ...passed
00:07:09.923    Test: blob_snapshot_rw_iov ...passed
00:07:10.181    Test: blob_inflate_rw ...passed
00:07:10.181    Test: blob_snapshot_freeze_io ...passed
00:07:10.439    Test: blob_operation_split_rw ...passed
00:07:10.439    Test: blob_operation_split_rw_iov ...passed
00:07:10.440    Test: blob_simultaneous_operations ...[2024-12-11 13:43:53.170387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:10.440  [2024-12-11 13:43:53.170492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.440  [2024-12-11 13:43:53.171820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:10.440  [2024-12-11 13:43:53.171861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.440  [2024-12-11 13:43:53.185262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:10.440  [2024-12-11 13:43:53.185335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.440  [2024-12-11 13:43:53.185450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:10.440  [2024-12-11 13:43:53.185466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.440  passed
00:07:10.698    Test: blob_persist_test ...passed
00:07:10.698    Test: blob_decouple_snapshot ...passed
00:07:10.698    Test: blob_seek_io_unit ...passed
00:07:10.698    Test: blob_nested_freezes ...passed
00:07:10.698    Test: blob_clone_resize ...passed
00:07:10.698    Test: blob_shallow_copy ...[2024-12-11 13:43:53.467011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:10.698  [2024-12-11 13:43:53.467280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7385:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:10.698  [2024-12-11 13:43:53.467446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7393:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:10.957  passed
00:07:10.957  Suite: blob_blob_nocopy_extent
00:07:10.957    Test: blob_write ...passed
00:07:10.957    Test: blob_read ...passed
00:07:10.957    Test: blob_rw_verify ...passed
00:07:10.957    Test: blob_rw_verify_iov_nomem ...passed
00:07:10.957    Test: blob_rw_iov_read_only ...passed
00:07:10.957    Test: blob_xattr ...passed
00:07:10.957    Test: blob_dirty_shutdown ...passed
00:07:11.216    Test: blob_is_degraded ...passed
00:07:11.216  Suite: blob_esnap_bs_nocopy_extent
00:07:11.216    Test: blob_esnap_create ...passed
00:07:11.216    Test: blob_esnap_thread_add_remove ...passed
00:07:11.216    Test: blob_esnap_clone_snapshot ...passed
00:07:11.216    Test: blob_esnap_clone_inflate ...passed
00:07:11.216    Test: blob_esnap_clone_decouple ...passed
00:07:11.216    Test: blob_esnap_clone_reload ...passed
00:07:11.216    Test: blob_esnap_hotplug ...passed
00:07:11.474    Test: blob_set_parent ...[2024-12-11 13:43:54.001506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:11.474  [2024-12-11 13:43:54.001594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:11.474  [2024-12-11 13:43:54.001712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:11.474  [2024-12-11 13:43:54.001740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:11.474  [2024-12-11 13:43:54.002190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:11.474  passed
00:07:11.474    Test: blob_set_external_parent ...[2024-12-11 13:43:54.035705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:11.474  [2024-12-11 13:43:54.035784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7839:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:11.474  [2024-12-11 13:43:54.035822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:11.475  [2024-12-11 13:43:54.036179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:11.475  passed
00:07:11.475  Suite: blob_nocopy_extent_16k_phys
00:07:11.475    Test: blob_init ...[2024-12-11 13:43:54.047555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5527:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:11.475  passed
00:07:11.475    Test: blob_thin_provision ...passed
00:07:11.475    Test: blob_read_only ...passed
00:07:11.475    Test: bs_load ...[2024-12-11 13:43:54.093371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:11.475  passed
00:07:11.475    Test: bs_load_custom_cluster_size ...passed
00:07:11.475    Test: bs_load_after_failed_grow ...passed
00:07:11.475    Test: bs_load_error ...passed
00:07:11.475    Test: bs_cluster_sz ...[2024-12-11 13:43:54.130812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:11.475  [2024-12-11 13:43:54.131016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5661:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:11.475  [2024-12-11 13:43:54.131054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3839:bs_opts_verify: *ERROR*: Cluster size 16383 is not an integral multiple of blocklen 4096
00:07:11.475  passed
00:07:11.475    Test: bs_resize_md ...passed
00:07:11.475    Test: bs_destroy ...passed
00:07:11.475    Test: bs_type ...passed
00:07:11.475    Test: bs_super_block ...passed
00:07:11.475    Test: bs_test_recover_cluster_count ...passed
00:07:11.475    Test: bs_grow_live ...passed
00:07:11.475    Test: bs_grow_live_no_space ...passed
00:07:11.475    Test: bs_test_grow ...passed
00:07:11.475    Test: blob_serialize_test ...passed
00:07:11.475    Test: super_block_crc ...passed
00:07:11.734    Test: blob_thin_prov_write_count_io ...passed
00:07:11.734    Test: blob_thin_prov_unmap_cluster ...passed
00:07:11.734    Test: bs_load_iter_test ...passed
00:07:11.734    Test: blob_relations ...[2024-12-11 13:43:54.349348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:11.734  [2024-12-11 13:43:54.349438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.734  [2024-12-11 13:43:54.350847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:11.734  [2024-12-11 13:43:54.350897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.734  passed
00:07:11.734    Test: blob_relations2 ...[2024-12-11 13:43:54.366415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:11.734  [2024-12-11 13:43:54.366490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.734  [2024-12-11 13:43:54.366514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:11.734  [2024-12-11 13:43:54.366528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.734  [2024-12-11 13:43:54.371409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:11.734  [2024-12-11 13:43:54.371493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.734  [2024-12-11 13:43:54.372148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:11.734  [2024-12-11 13:43:54.372194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.734  passed
00:07:11.734    Test: blob_relations3 ...passed
00:07:11.993    Test: blobstore_clean_power_failure ...passed
00:07:11.993    Test: blob_delete_snapshot_power_failure ...[2024-12-11 13:43:54.528672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:11.993  [2024-12-11 13:43:54.541450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:11.993  [2024-12-11 13:43:54.554361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:11.993  [2024-12-11 13:43:54.554447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:11.993  [2024-12-11 13:43:54.554470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.993  [2024-12-11 13:43:54.567332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:11.993  [2024-12-11 13:43:54.567399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:11.993  [2024-12-11 13:43:54.567418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:11.993  [2024-12-11 13:43:54.567440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.993  [2024-12-11 13:43:54.580372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:11.993  [2024-12-11 13:43:54.580438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:11.993  [2024-12-11 13:43:54.580457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:11.993  [2024-12-11 13:43:54.580477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.993  [2024-12-11 13:43:54.593458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:11.993  [2024-12-11 13:43:54.593590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.993  [2024-12-11 13:43:54.606818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:11.993  [2024-12-11 13:43:54.606965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.993  [2024-12-11 13:43:54.620455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:11.993  [2024-12-11 13:43:54.620572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:11.993  passed
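Editor's note: deleting a snapshot is a multi-step operation (open the clone, sync clone metadata with the reassigned xattrs, sync snapshot metadata), and the power-failure test above injects an I/O error at each step, which is why every failure funnels through bs_delete_blob_finish. From the caller's side it is still a single request; a sketch, assuming the standard spdk_bs_delete_blob() entry point:

    static void
    delete_done(void *cb_arg, int bserrno)
    {
            /* Any of the injected mid-sequence failures above is
             * reported here as one negative errno. */
    }

    static void
    delete_snapshot(struct spdk_blob_store *bs, spdk_blob_id snapshot_id)
    {
            /* Also fails if the snapshot is open or has more than one
             * clone (see the blob_relations cases earlier). */
            spdk_bs_delete_blob(bs, snapshot_id, delete_done, NULL);
    }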
00:07:11.993    Test: blob_create_snapshot_power_failure ...[2024-12-11 13:43:54.659130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:11.993  [2024-12-11 13:43:54.671321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:11.993  [2024-12-11 13:43:54.695586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:11.993  [2024-12-11 13:43:54.708280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:11.993  passed
00:07:11.993    Test: blob_io_unit ...passed
00:07:12.252    Test: blob_io_unit_compatibility ...passed
00:07:12.252    Test: blob_ext_md_pages ...passed
00:07:12.252    Test: blob_esnap_io_4096_4096 ...passed
00:07:12.252    Test: blob_esnap_io_512_512 ...passed
00:07:12.252    Test: blob_esnap_io_4096_512 ...passed
00:07:12.252    Test: blob_esnap_io_512_4096 ...passed
00:07:12.252    Test: blob_esnap_clone_resize ...passed
00:07:12.252  Suite: blob_bs_nocopy_extent_16k_phys
00:07:12.516    Test: blob_open ...passed
00:07:12.516    Test: blob_create ...[2024-12-11 13:43:55.076430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:12.516  passed
00:07:12.516    Test: blob_create_loop ...passed
00:07:12.516    Test: blob_create_fail ...[2024-12-11 13:43:55.197497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:12.516  passed
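Editor's note: the two creation failures above map to plain errnos: -28 is -ENOSPC (65 clusters requested from a store that cannot supply them) and -22 is -EINVAL (malformed options). A sketch of a well-formed creation, assuming the spdk_blob_opts_init()/spdk_bs_create_blob_ext() API:

    static void
    create_done(void *cb_arg, spdk_blob_id blobid, int bserrno)
    {
            /* -ENOSPC (-28) if num_clusters exceeds the free cluster
             * count, -EINVAL (-22) for invalid options. */
    }

    static void
    create_blob(struct spdk_blob_store *bs)
    {
            struct spdk_blob_opts opts;

            spdk_blob_opts_init(&opts, sizeof(opts));
            opts.num_clusters = 10;  /* must fit the free cluster count */
            spdk_bs_create_blob_ext(bs, &opts, create_done, NULL);
    }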
00:07:12.516    Test: blob_create_internal ...passed
00:07:12.516    Test: blob_create_zero_extent ...passed
00:07:12.784    Test: blob_snapshot ...passed
00:07:12.784    Test: blob_clone ...passed
00:07:12.784    Test: blob_inflate ...[2024-12-11 13:43:55.380687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:12.784  passed
00:07:12.784    Test: blob_delete ...passed
00:07:12.784    Test: blob_resize_test ...[2024-12-11 13:43:55.446464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:12.784  passed
00:07:12.784    Test: blob_resize_thin_test ...passed
00:07:12.784    Test: channel_ops ...passed
00:07:12.784    Test: blob_super ...passed
00:07:13.042    Test: blob_rw_verify_iov ...passed
00:07:13.042    Test: blob_unmap ...passed
00:07:13.042    Test: blob_iter ...passed
00:07:13.042    Test: blob_parse_md ...passed
00:07:13.042    Test: bs_load_pending_removal ...passed
00:07:13.042    Test: bs_unload ...[2024-12-11 13:43:55.751400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:13.042  passed
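Editor's note: "Blobstore still has open blobs" is the unload precondition: every blob handle must be closed before spdk_bs_unload() will proceed. A sketch of the close-then-unload chaining this implies, under the usual SPDK async callback conventions:

    static void
    unload_done(void *cb_arg, int bserrno)
    {
            /* 0 once no blobs remain open. */
    }

    static void
    blob_closed(void *cb_arg, int bserrno)
    {
            struct spdk_blob_store *bs = cb_arg;

            spdk_bs_unload(bs, unload_done, NULL);
    }

    static void
    close_and_unload(struct spdk_blob *blob, struct spdk_blob_store *bs)
    {
            /* Unload only from the close completion, never while the
             * blob is still open. */
            spdk_blob_close(blob, blob_closed, bs);
    }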
00:07:13.042    Test: bs_usable_clusters ...passed
00:07:13.042    Test: blob_crc ...[2024-12-11 13:43:55.817438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:13.042  [2024-12-11 13:43:55.817599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:13.301  passed
00:07:13.301    Test: blob_flags ...passed
00:07:13.301    Test: bs_version ...passed
00:07:13.301    Test: blob_set_xattrs_test ...[2024-12-11 13:43:55.917742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:13.301  [2024-12-11 13:43:55.917840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:13.301  passed
00:07:13.301    Test: blob_thin_prov_alloc ...passed
00:07:13.558    Test: blob_insert_cluster_msg_test ...passed
00:07:13.558    Test: blob_thin_prov_rw ...passed
00:07:13.558    Test: blob_thin_prov_rle ...passed
00:07:13.558    Test: blob_thin_prov_rw_iov ...passed
00:07:13.558    Test: blob_snapshot_rw ...passed
00:07:13.558    Test: blob_snapshot_rw_iov ...passed
00:07:13.817    Test: blob_inflate_rw ...passed
00:07:13.817    Test: blob_snapshot_freeze_io ...passed
00:07:14.075    Test: blob_operation_split_rw ...passed
00:07:14.332    Test: blob_operation_split_rw_iov ...passed
00:07:14.332    Test: blob_simultaneous_operations ...[2024-12-11 13:43:56.902542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.332  [2024-12-11 13:43:56.902630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.332  [2024-12-11 13:43:56.904004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.332  [2024-12-11 13:43:56.904049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.332  [2024-12-11 13:43:56.919862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.332  [2024-12-11 13:43:56.919937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.332  [2024-12-11 13:43:56.920094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.332  [2024-12-11 13:43:56.920112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.332  passed
00:07:14.332    Test: blob_persist_test ...passed
00:07:14.332    Test: blob_decouple_snapshot ...passed
00:07:14.332    Test: blob_seek_io_unit ...passed
00:07:14.591    Test: blob_nested_freezes ...passed
00:07:14.591    Test: blob_clone_resize ...passed
00:07:14.591    Test: blob_shallow_copy ...[2024-12-11 13:43:57.203439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:14.591  [2024-12-11 13:43:57.206255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7385:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:14.591  [2024-12-11 13:43:57.206457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7393:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:14.591  passed
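Editor's note: the three shallow-copy rejections above are the API's preconditions: the source blob must be read-only, and the destination device must be at least blob-sized with a block size compatible with the blobstore's. A sketch of making the source eligible, assuming spdk_blob_set_read_only() returns 0 on success and that the flag must be synced to persist:

    static void
    sync_done(void *cb_arg, int bserrno)
    {
            /* The read-only flag is durable once the metadata sync
             * completes. */
    }

    static void
    prepare_shallow_copy_src(struct spdk_blob *blob)
    {
            if (spdk_blob_set_read_only(blob) == 0) {
                    spdk_blob_sync_md(blob, sync_done, NULL);
            }
    }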
00:07:14.591  Suite: blob_blob_nocopy_extent_16k_phys
00:07:14.591    Test: blob_write ...passed
00:07:14.591    Test: blob_read ...passed
00:07:14.591    Test: blob_rw_verify ...passed
00:07:14.591    Test: blob_rw_verify_iov_nomem ...passed
00:07:14.849    Test: blob_rw_iov_read_only ...passed
00:07:14.849    Test: blob_xattr ...passed
00:07:14.849    Test: blob_dirty_shutdown ...passed
00:07:14.849    Test: blob_is_degraded ...passed
00:07:14.849  Suite: blob_esnap_bs_nocopy_extent_16k_phys
00:07:14.849    Test: blob_esnap_create ...passed
00:07:14.849    Test: blob_esnap_thread_add_remove ...passed
00:07:14.849    Test: blob_esnap_clone_snapshot ...passed
00:07:14.849    Test: blob_esnap_clone_inflate ...passed
00:07:15.140    Test: blob_esnap_clone_decouple ...passed
00:07:15.140    Test: blob_esnap_clone_reload ...passed
00:07:15.140    Test: blob_esnap_hotplug ...passed
00:07:15.140    Test: blob_set_parent ...[2024-12-11 13:43:57.754360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:15.140  [2024-12-11 13:43:57.754458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:15.140  [2024-12-11 13:43:57.754670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:15.140  [2024-12-11 13:43:57.754705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:15.140  [2024-12-11 13:43:57.755436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:15.140  passed
00:07:15.140    Test: blob_set_external_parent ...[2024-12-11 13:43:57.790887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:15.140  [2024-12-11 13:43:57.790981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7839:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 258048 is not an integer multiple of cluster size 65536
00:07:15.140  [2024-12-11 13:43:57.791001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:15.140  [2024-12-11 13:43:57.791600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:15.140  passed
00:07:15.140  Suite: blob_copy_noextent
00:07:15.140    Test: blob_init ...[2024-12-11 13:43:57.803274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5527:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:15.140  passed
00:07:15.140    Test: blob_thin_provision ...passed
00:07:15.140    Test: blob_read_only ...passed
00:07:15.140    Test: bs_load ...[2024-12-11 13:43:57.846908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:15.140  passed
00:07:15.140    Test: bs_load_custom_cluster_size ...passed
00:07:15.140    Test: bs_load_after_failed_grow ...passed
00:07:15.140    Test: bs_load_error ...passed
00:07:15.140    Test: bs_cluster_sz ...[2024-12-11 13:43:57.880328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:15.140  [2024-12-11 13:43:57.880496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5661:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:15.140  [2024-12-11 13:43:57.880528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3839:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:07:15.140  passed
00:07:15.140    Test: bs_resize_md ...passed
00:07:15.140    Test: bs_destroy ...passed
00:07:15.399    Test: bs_type ...passed
00:07:15.399    Test: bs_super_block ...passed
00:07:15.399    Test: bs_test_recover_cluster_count ...passed
00:07:15.399    Test: bs_grow_live ...passed
00:07:15.399    Test: bs_grow_live_no_space ...passed
00:07:15.399    Test: bs_test_grow ...passed
00:07:15.399    Test: blob_serialize_test ...passed
00:07:15.399    Test: super_block_crc ...passed
00:07:15.399    Test: blob_thin_prov_write_count_io ...passed
00:07:15.399    Test: blob_thin_prov_unmap_cluster ...passed
00:07:15.399    Test: bs_load_iter_test ...passed
00:07:15.399    Test: blob_relations ...[2024-12-11 13:43:58.067380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.399  [2024-12-11 13:43:58.067469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.399  [2024-12-11 13:43:58.067999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.399  [2024-12-11 13:43:58.068031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.399  passed
00:07:15.399    Test: blob_relations2 ...[2024-12-11 13:43:58.080715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.399  [2024-12-11 13:43:58.080789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.399  [2024-12-11 13:43:58.080810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.399  [2024-12-11 13:43:58.080821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.399  [2024-12-11 13:43:58.081697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.399  [2024-12-11 13:43:58.081741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.399  [2024-12-11 13:43:58.082003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.399  [2024-12-11 13:43:58.082032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.399  passed
00:07:15.399    Test: blob_relations3 ...passed
00:07:15.659    Test: blobstore_clean_power_failure ...passed
00:07:15.659    Test: blob_delete_snapshot_power_failure ...[2024-12-11 13:43:58.226309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:15.659  [2024-12-11 13:43:58.237386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:15.659  [2024-12-11 13:43:58.237474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:15.659  [2024-12-11 13:43:58.237494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.659  [2024-12-11 13:43:58.248841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:15.659  [2024-12-11 13:43:58.248912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:15.659  [2024-12-11 13:43:58.248928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:15.659  [2024-12-11 13:43:58.248947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.659  [2024-12-11 13:43:58.260402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:15.659  [2024-12-11 13:43:58.260494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.659  [2024-12-11 13:43:58.272095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:15.659  [2024-12-11 13:43:58.272211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.659  [2024-12-11 13:43:58.283914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:15.659  [2024-12-11 13:43:58.284002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.659  passed
00:07:15.659    Test: blob_create_snapshot_power_failure ...[2024-12-11 13:43:58.324418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:15.659  [2024-12-11 13:43:58.346288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:15.659  [2024-12-11 13:43:58.357420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:15.659  passed
00:07:15.659    Test: blob_io_unit ...passed
00:07:15.659    Test: blob_io_unit_compatibility ...passed
00:07:15.659    Test: blob_ext_md_pages ...passed
00:07:15.917    Test: blob_esnap_io_4096_4096 ...passed
00:07:15.917    Test: blob_esnap_io_512_512 ...passed
00:07:15.917    Test: blob_esnap_io_4096_512 ...passed
00:07:15.917    Test: blob_esnap_io_512_4096 ...passed
00:07:15.917    Test: blob_esnap_clone_resize ...passed
00:07:15.917  Suite: blob_bs_copy_noextent
00:07:15.917    Test: blob_open ...passed
00:07:15.917    Test: blob_create ...[2024-12-11 13:43:58.610376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:15.917  passed
00:07:15.917    Test: blob_create_loop ...passed
00:07:16.176    Test: blob_create_fail ...[2024-12-11 13:43:58.701465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:16.176  passed
00:07:16.176    Test: blob_create_internal ...passed
00:07:16.176    Test: blob_create_zero_extent ...passed
00:07:16.176    Test: blob_snapshot ...passed
00:07:16.176    Test: blob_clone ...passed
00:07:16.176    Test: blob_inflate ...[2024-12-11 13:43:58.862783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:16.176  passed
00:07:16.176    Test: blob_delete ...passed
00:07:16.176    Test: blob_resize_test ...[2024-12-11 13:43:58.923849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:16.176  passed
00:07:16.434    Test: blob_resize_thin_test ...passed
00:07:16.434    Test: channel_ops ...passed
00:07:16.434    Test: blob_super ...passed
00:07:16.434    Test: blob_rw_verify_iov ...passed
00:07:16.434    Test: blob_unmap ...passed
00:07:16.434    Test: blob_iter ...passed
00:07:16.434    Test: blob_parse_md ...passed
00:07:16.434    Test: bs_load_pending_removal ...passed
00:07:16.434    Test: bs_unload ...[2024-12-11 13:43:59.205426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:16.692  passed
00:07:16.693    Test: bs_usable_clusters ...passed
00:07:16.693    Test: blob_crc ...[2024-12-11 13:43:59.266492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:16.693  [2024-12-11 13:43:59.266591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:16.693  passed
00:07:16.693    Test: blob_flags ...passed
00:07:16.693    Test: bs_version ...passed
00:07:16.693    Test: blob_set_xattrs_test ...[2024-12-11 13:43:59.360790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:16.693  [2024-12-11 13:43:59.360875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:16.693  passed
00:07:16.951    Test: blob_thin_prov_alloc ...passed
00:07:16.951    Test: blob_insert_cluster_msg_test ...passed
00:07:16.951    Test: blob_thin_prov_rw ...passed
00:07:16.951    Test: blob_thin_prov_rle ...passed
00:07:16.951    Test: blob_thin_prov_rw_iov ...passed
00:07:16.951    Test: blob_snapshot_rw ...passed
00:07:16.951    Test: blob_snapshot_rw_iov ...passed
00:07:17.209    Test: blob_inflate_rw ...passed
00:07:17.209    Test: blob_snapshot_freeze_io ...passed
00:07:17.467    Test: blob_operation_split_rw ...passed
00:07:17.467    Test: blob_operation_split_rw_iov ...passed
00:07:17.726    Test: blob_simultaneous_operations ...[2024-12-11 13:44:00.245880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.726  [2024-12-11 13:44:00.245963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.726  [2024-12-11 13:44:00.246406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.726  [2024-12-11 13:44:00.246445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.726  [2024-12-11 13:44:00.249107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.726  [2024-12-11 13:44:00.249154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.726  [2024-12-11 13:44:00.249235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.726  [2024-12-11 13:44:00.249251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.726  passed
00:07:17.726    Test: blob_persist_test ...passed
00:07:17.726    Test: blob_decouple_snapshot ...passed
00:07:17.726    Test: blob_seek_io_unit ...passed
00:07:17.726    Test: blob_nested_freezes ...passed
00:07:17.726    Test: blob_clone_resize ...passed
00:07:17.726    Test: blob_shallow_copy ...[2024-12-11 13:44:00.480031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:17.726  [2024-12-11 13:44:00.480269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7385:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:17.726  [2024-12-11 13:44:00.480413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7393:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:17.726  passed
00:07:17.726  Suite: blob_blob_copy_noextent
00:07:17.985    Test: blob_write ...passed
00:07:17.985    Test: blob_read ...passed
00:07:17.985    Test: blob_rw_verify ...passed
00:07:17.985    Test: blob_rw_verify_iov_nomem ...passed
00:07:17.985    Test: blob_rw_iov_read_only ...passed
00:07:17.985    Test: blob_xattr ...passed
00:07:17.985    Test: blob_dirty_shutdown ...passed
00:07:17.985    Test: blob_is_degraded ...passed
00:07:17.985  Suite: blob_esnap_bs_copy_noextent
00:07:18.243    Test: blob_esnap_create ...passed
00:07:18.244    Test: blob_esnap_thread_add_remove ...passed
00:07:18.244    Test: blob_esnap_clone_snapshot ...passed
00:07:18.244    Test: blob_esnap_clone_inflate ...passed
00:07:18.244    Test: blob_esnap_clone_decouple ...passed
00:07:18.244    Test: blob_esnap_clone_reload ...passed
00:07:18.244    Test: blob_esnap_hotplug ...passed
00:07:18.244    Test: blob_set_parent ...[2024-12-11 13:44:00.997948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:18.244  [2024-12-11 13:44:00.998032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:18.244  [2024-12-11 13:44:00.998143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:18.244  [2024-12-11 13:44:00.998170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:18.244  [2024-12-11 13:44:00.998648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:18.244  passed
00:07:18.502    Test: blob_set_external_parent ...[2024-12-11 13:44:01.030821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:18.502  [2024-12-11 13:44:01.030916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7839:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:18.502  [2024-12-11 13:44:01.030936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:18.502  [2024-12-11 13:44:01.031327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:18.502  passed
00:07:18.502  Suite: blob_copy_extent
00:07:18.502    Test: blob_init ...[2024-12-11 13:44:01.042300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5527:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:18.502  passed
00:07:18.502    Test: blob_thin_provision ...passed
00:07:18.502    Test: blob_read_only ...passed
00:07:18.502    Test: bs_load ...[2024-12-11 13:44:01.086357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:18.502  passed
00:07:18.502    Test: bs_load_custom_cluster_size ...passed
00:07:18.502    Test: bs_load_after_failed_grow ...passed
00:07:18.502    Test: bs_load_error ...passed
00:07:18.502    Test: bs_cluster_sz ...[2024-12-11 13:44:01.121847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:18.502  [2024-12-11 13:44:01.122074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5661:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:18.502  [2024-12-11 13:44:01.122113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3839:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:07:18.502  passed
00:07:18.502    Test: bs_resize_md ...passed
00:07:18.502    Test: bs_destroy ...passed
00:07:18.502    Test: bs_type ...passed
00:07:18.502    Test: bs_super_block ...passed
00:07:18.502    Test: bs_test_recover_cluster_count ...passed
00:07:18.502    Test: bs_grow_live ...passed
00:07:18.502    Test: bs_grow_live_no_space ...passed
00:07:18.502    Test: bs_test_grow ...passed
00:07:18.502    Test: blob_serialize_test ...passed
00:07:18.502    Test: super_block_crc ...passed
00:07:18.502    Test: blob_thin_prov_write_count_io ...passed
00:07:18.761    Test: blob_thin_prov_unmap_cluster ...passed
00:07:18.761    Test: bs_load_iter_test ...passed
00:07:18.761    Test: blob_relations ...[2024-12-11 13:44:01.314514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:18.761  [2024-12-11 13:44:01.314600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  [2024-12-11 13:44:01.315536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:18.761  [2024-12-11 13:44:01.315573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  passed
00:07:18.761    Test: blob_relations2 ...[2024-12-11 13:44:01.329759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:18.761  [2024-12-11 13:44:01.329835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  [2024-12-11 13:44:01.329875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:18.761  [2024-12-11 13:44:01.329888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  [2024-12-11 13:44:01.331264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:18.761  [2024-12-11 13:44:01.331312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  [2024-12-11 13:44:01.331758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:18.761  [2024-12-11 13:44:01.331791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  passed
00:07:18.761    Test: blob_relations3 ...passed
00:07:18.761    Test: blobstore_clean_power_failure ...passed
00:07:18.761    Test: blob_delete_snapshot_power_failure ...[2024-12-11 13:44:01.486365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:18.761  [2024-12-11 13:44:01.498847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:18.761  [2024-12-11 13:44:01.511217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:18.761  [2024-12-11 13:44:01.511295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:18.761  [2024-12-11 13:44:01.511317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  [2024-12-11 13:44:01.523618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:18.761  [2024-12-11 13:44:01.523707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:18.761  [2024-12-11 13:44:01.523726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:18.761  [2024-12-11 13:44:01.523747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.761  [2024-12-11 13:44:01.536144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:18.761  [2024-12-11 13:44:01.536243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:18.761  [2024-12-11 13:44:01.536262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:18.761  [2024-12-11 13:44:01.536284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.020  [2024-12-11 13:44:01.548787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:19.020  [2024-12-11 13:44:01.548885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.020  [2024-12-11 13:44:01.561367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:19.020  [2024-12-11 13:44:01.561482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.020  [2024-12-11 13:44:01.574102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:19.020  [2024-12-11 13:44:01.574199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.020  passed
00:07:19.020    Test: blob_create_snapshot_power_failure ...[2024-12-11 13:44:01.611037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:19.020  [2024-12-11 13:44:01.622961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:19.020  [2024-12-11 13:44:01.646718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:19.020  [2024-12-11 13:44:01.658931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:19.020  passed
00:07:19.020    Test: blob_io_unit ...passed
00:07:19.020    Test: blob_io_unit_compatibility ...passed
00:07:19.020    Test: blob_ext_md_pages ...passed
00:07:19.020    Test: blob_esnap_io_4096_4096 ...passed
00:07:19.020    Test: blob_esnap_io_512_512 ...passed
00:07:19.279    Test: blob_esnap_io_4096_512 ...passed
00:07:19.279    Test: blob_esnap_io_512_4096 ...passed
00:07:19.279    Test: blob_esnap_clone_resize ...passed
00:07:19.279  Suite: blob_bs_copy_extent
00:07:19.279    Test: blob_open ...passed
00:07:19.279    Test: blob_create ...[2024-12-11 13:44:01.933566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:19.279  passed
00:07:19.279    Test: blob_create_loop ...passed
00:07:19.279    Test: blob_create_fail ...[2024-12-11 13:44:02.041538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:19.279  passed
00:07:19.538    Test: blob_create_internal ...passed
00:07:19.538    Test: blob_create_zero_extent ...passed
00:07:19.538    Test: blob_snapshot ...passed
00:07:19.538    Test: blob_clone ...passed
00:07:19.538    Test: blob_inflate ...[2024-12-11 13:44:02.210899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:19.538  passed
00:07:19.538    Test: blob_delete ...passed
00:07:19.538    Test: blob_resize_test ...[2024-12-11 13:44:02.275167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:19.538  passed
00:07:19.796    Test: blob_resize_thin_test ...passed
00:07:19.796    Test: channel_ops ...passed
00:07:19.796    Test: blob_super ...passed
00:07:19.796    Test: blob_rw_verify_iov ...passed
00:07:19.796    Test: blob_unmap ...passed
00:07:19.796    Test: blob_iter ...passed
00:07:19.796    Test: blob_parse_md ...passed
00:07:19.796    Test: bs_load_pending_removal ...passed
00:07:19.796    Test: bs_unload ...[2024-12-11 13:44:02.571207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:20.054  passed
00:07:20.054    Test: bs_usable_clusters ...passed
00:07:20.054    Test: blob_crc ...[2024-12-11 13:44:02.637315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:20.054  [2024-12-11 13:44:02.637420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:20.054  passed
00:07:20.054    Test: blob_flags ...passed
00:07:20.054    Test: bs_version ...passed
00:07:20.054    Test: blob_set_xattrs_test ...[2024-12-11 13:44:02.736737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:20.054  [2024-12-11 13:44:02.736816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6370:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:20.054  passed
00:07:20.312    Test: blob_thin_prov_alloc ...passed
00:07:20.312    Test: blob_insert_cluster_msg_test ...passed
00:07:20.312    Test: blob_thin_prov_rw ...passed
00:07:20.312    Test: blob_thin_prov_rle ...passed
00:07:20.312    Test: blob_thin_prov_rw_iov ...passed
00:07:20.312    Test: blob_snapshot_rw ...passed
00:07:20.312    Test: blob_snapshot_rw_iov ...passed
00:07:20.570    Test: blob_inflate_rw ...passed
00:07:20.865    Test: blob_snapshot_freeze_io ...passed
00:07:20.866    Test: blob_operation_split_rw ...passed
00:07:21.147    Test: blob_operation_split_rw_iov ...passed
00:07:21.147    Test: blob_simultaneous_operations ...[2024-12-11 13:44:03.689268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:21.147  [2024-12-11 13:44:03.689355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:21.147  [2024-12-11 13:44:03.689840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:21.147  [2024-12-11 13:44:03.689882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:21.147  [2024-12-11 13:44:03.692529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:21.147  [2024-12-11 13:44:03.692579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:21.147  [2024-12-11 13:44:03.692691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:21.147  [2024-12-11 13:44:03.692709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:21.147  passed
00:07:21.147    Test: blob_persist_test ...passed
00:07:21.147    Test: blob_decouple_snapshot ...passed
00:07:21.147    Test: blob_seek_io_unit ...passed
00:07:21.147    Test: blob_nested_freezes ...passed
00:07:21.147    Test: blob_clone_resize ...passed
00:07:21.405    Test: blob_shallow_copy ...[2024-12-11 13:44:03.934985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:21.405  [2024-12-11 13:44:03.935257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7385:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:21.405  [2024-12-11 13:44:03.935427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7393:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:21.405  passed
00:07:21.405  Suite: blob_blob_copy_extent
00:07:21.405    Test: blob_write ...passed
00:07:21.406    Test: blob_read ...passed
00:07:21.406    Test: blob_rw_verify ...passed
00:07:21.406    Test: blob_rw_verify_iov_nomem ...passed
00:07:21.406    Test: blob_rw_iov_read_only ...passed
00:07:21.406    Test: blob_xattr ...passed
00:07:21.663    Test: blob_dirty_shutdown ...passed
00:07:21.663    Test: blob_is_degraded ...passed
00:07:21.663  Suite: blob_esnap_bs_copy_extent
00:07:21.663    Test: blob_esnap_create ...passed
00:07:21.663    Test: blob_esnap_thread_add_remove ...passed
00:07:21.663    Test: blob_esnap_clone_snapshot ...passed
00:07:21.663    Test: blob_esnap_clone_inflate ...passed
00:07:21.663    Test: blob_esnap_clone_decouple ...passed
00:07:21.663    Test: blob_esnap_clone_reload ...passed
00:07:21.922    Test: blob_esnap_hotplug ...passed
00:07:21.922    Test: blob_set_parent ...[2024-12-11 13:44:04.474870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:21.922  [2024-12-11 13:44:04.474971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:21.922  [2024-12-11 13:44:04.475108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:21.922  [2024-12-11 13:44:04.475137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:21.922  [2024-12-11 13:44:04.475663] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:21.922  passed
00:07:21.922    Test: blob_set_external_parent ...[2024-12-11 13:44:04.508714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:21.922  [2024-12-11 13:44:04.508794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7839:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:21.922  [2024-12-11 13:44:04.508815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:21.922  [2024-12-11 13:44:04.509230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:21.922  passed
00:07:21.922  
00:07:21.922  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.922                suites     20     20    n/a      0        0
00:07:21.922                 tests    475    475    475      0        0
00:07:21.922               asserts 205372 205372 205372      0      n/a
00:07:21.922  
00:07:21.922  Elapsed time =   17.820 seconds
00:07:21.922   13:44:04 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:07:21.922  
00:07:21.922  
00:07:21.922       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.922       http://cunit.sourceforge.net/
00:07:21.922  
00:07:21.922  
00:07:21.922  Suite: blob_bdev
00:07:21.922    Test: create_bs_dev ...passed
00:07:21.922    Test: create_bs_dev_ro ...[2024-12-11 13:44:04.639946] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 540:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:07:21.922  passed
00:07:21.922    Test: create_bs_dev_rw ...passed
00:07:21.922    Test: claim_bs_dev ...[2024-12-11 13:44:04.640410] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 350:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:07:21.922  passed
00:07:21.922    Test: claim_bs_dev_ro ...passed
00:07:21.922    Test: deferred_destroy_refs ...passed
00:07:21.922    Test: deferred_destroy_channels ...passed
00:07:21.922    Test: deferred_destroy_threads ...passed
00:07:21.922  
00:07:21.922  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.922                suites      1      1    n/a      0        0
00:07:21.922                 tests      8      8      8      0        0
00:07:21.922               asserts    119    119    119      0      n/a
00:07:21.922  
00:07:21.922  Elapsed time =    0.001 seconds
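Editor's note: the blob_bdev suite covers the bdev-to-bs_dev adapter: create_bs_dev_ro shows a read-only bdev rejecting incompatible open options, and claim_bs_dev shows that a bs_dev can be claimed only once. A sketch of wrapping a named bdev, assuming the spdk_bdev_create_bs_dev_ext() helper:

    #include "spdk/bdev.h"
    #include "spdk/blob_bdev.h"

    static void
    bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                  void *event_ctx)
    {
            /* React to hot-remove/resize of the backing bdev. */
    }

    static int
    open_bs_dev(const char *name, struct spdk_bs_dev **out)
    {
            /* Returns 0 and fills *out on success; a later second claim
             * fails as "could not claim bs dev" above. */
            return spdk_bdev_create_bs_dev_ext(name, bdev_event_cb,
                                               NULL, out);
    }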
00:07:21.922   13:44:04 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:07:21.922  
00:07:21.922  
00:07:21.922       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.922       http://cunit.sourceforge.net/
00:07:21.922  
00:07:21.922  
00:07:21.922  Suite: tree
00:07:21.922    Test: blobfs_tree_op_test ...passed
00:07:21.922  
00:07:21.922  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.922                suites      1      1    n/a      0        0
00:07:21.922                 tests      1      1      1      0        0
00:07:21.922               asserts     27     27     27      0      n/a
00:07:21.922  
00:07:21.922  Elapsed time =    0.000 seconds
00:07:21.922   13:44:04 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:07:22.180  
00:07:22.180  
00:07:22.180       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.180       http://cunit.sourceforge.net/
00:07:22.180  
00:07:22.180  
00:07:22.180  Suite: blobfs_async_ut
00:07:22.180    Test: fs_init ...passed
00:07:22.180    Test: fs_open ...passed
00:07:22.180    Test: fs_create ...passed
00:07:22.180    Test: fs_truncate ...passed
00:07:22.180    Test: fs_rename ...[2024-12-11 13:44:04.867056] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to delete
00:07:22.180  passed
00:07:22.180    Test: fs_rw_async ...passed
00:07:22.180    Test: fs_writev_readv_async ...passed
00:07:22.180    Test: tree_find_buffer_ut ...passed
00:07:22.180    Test: channel_ops ...passed
00:07:22.180    Test: channel_ops_sync ...passed
00:07:22.180  
00:07:22.181  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.181                suites      1      1    n/a      0        0
00:07:22.181                 tests     10     10     10      0        0
00:07:22.181               asserts    292    292    292      0      n/a
00:07:22.181  
00:07:22.181  Elapsed time =    0.211 seconds
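Editor's note: the fs_rename failure above is blobfs reporting -ENOENT for a delete of a name that no longer exists, which is exactly what the async delete entry point hands back to its callback. A sketch, assuming the public spdk_fs_delete_file_async() call:

    #include "spdk/blobfs.h"

    static void
    delete_cb(void *ctx, int fserrno)
    {
            /* -ENOENT when the name does not resolve, as in the
             * fs_rename case above. */
    }

    static void
    delete_file(struct spdk_filesystem *fs)
    {
            spdk_fs_delete_file_async(fs, "file1", delete_cb, NULL);
    }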
00:07:22.439   13:44:04 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:07:22.439  
00:07:22.439  
00:07:22.439       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.439       http://cunit.sourceforge.net/
00:07:22.439  
00:07:22.439  
00:07:22.439  Suite: blobfs_sync_ut
00:07:22.439    Test: cache_read_after_write ...[2024-12-11 13:44:05.087022] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to delete
00:07:22.439  passed
00:07:22.439    Test: file_length ...passed
00:07:22.439    Test: append_write_to_extend_blob ...passed
00:07:22.439    Test: partial_buffer ...passed
00:07:22.439    Test: cache_write_null_buffer ...passed
00:07:22.439    Test: fs_create_sync ...passed
00:07:22.439    Test: fs_rename_sync ...passed
00:07:22.439    Test: cache_append_no_cache ...passed
00:07:22.439    Test: fs_delete_file_without_close ...passed
00:07:22.439  
00:07:22.439  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.439                suites      1      1    n/a      0        0
00:07:22.439                 tests      9      9      9      0        0
00:07:22.439               asserts    345    345    345      0      n/a
00:07:22.439  
00:07:22.439  Elapsed time =    0.413 seconds
00:07:22.697   13:44:05 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:07:22.697  
00:07:22.697  
00:07:22.697       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.697       http://cunit.sourceforge.net/
00:07:22.697  
00:07:22.697  
00:07:22.697  Suite: blobfs_bdev_ut
00:07:22.697    Test: spdk_blobfs_bdev_detect_test ...[2024-12-11 13:44:05.291775] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:22.697  passed
00:07:22.697    Test: spdk_blobfs_bdev_create_test ...[2024-12-11 13:44:05.292124] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:22.697  passed
00:07:22.697    Test: spdk_blobfs_bdev_mount_test ...passed
00:07:22.697  
00:07:22.697  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.697                suites      1      1    n/a      0        0
00:07:22.697                 tests      3      3      3      0        0
00:07:22.697               asserts      9      9      9      0      n/a
00:07:22.697  
00:07:22.697  Elapsed time =    0.001 seconds
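Editor's note: both blobfs_bdev errors above come from the unload callback shared by the detect and create paths, so either operation reports a bad or unformatted bdev the same way. A sketch of the detect path, assuming the spdk_blobfs_bdev_detect() helper from module/blobfs/bdev:

    #include "spdk/blobfs_bdev.h"

    static void
    detect_cb(void *cb_arg, int fserrno)
    {
            /* Non-zero when no blobfs is present on the bdev, or when
             * the post-detect unload fails as logged above. */
    }

    static void
    detect_blobfs(void)
    {
            spdk_blobfs_bdev_detect("ut_bdev", detect_cb, NULL);
    }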
00:07:22.697  
00:07:22.697  real	0m18.641s
00:07:22.697  user	0m17.598s
00:07:22.697  sys	0m1.258s
00:07:22.697   13:44:05 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.697   13:44:05 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x
00:07:22.697  ************************************
00:07:22.697  END TEST unittest_blob_blobfs
00:07:22.697  ************************************
00:07:22.697   13:44:05 unittest -- unit/unittest.sh@216 -- # run_test unittest_event unittest_event
00:07:22.697   13:44:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.697   13:44:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.697   13:44:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:22.697  ************************************
00:07:22.697  START TEST unittest_event
00:07:22.697  ************************************
00:07:22.697   13:44:05 unittest.unittest_event -- common/autotest_common.sh@1129 -- # unittest_event
00:07:22.697   13:44:05 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut
00:07:22.697  
00:07:22.697  
00:07:22.697       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.697       http://cunit.sourceforge.net/
00:07:22.697  
00:07:22.697  
00:07:22.697  Suite: app_suite
00:07:22.697    Test: test_spdk_app_parse_args ...app_ut [options]
00:07:22.697  
00:07:22.697  CPU options:
00:07:22.697   -m, --cpumask <mask or list>    core mask (like 0xF) or core list enclosed in '[]' for DPDK
00:07:22.697                                   (like [0,1,10])
00:07:22.697       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:22.697  app_ut: invalid option -- 'z'
00:07:22.697                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:22.697                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:07:22.697                             Within the group, '-' is used for range separator,
00:07:22.697                             ',' is used for single number separator.
00:07:22.697                             '( )' can be omitted for single element group,
00:07:22.697                             '@' can be omitted if cpus and lcores have the same value
00:07:22.697       --disable-cpumask-locks    Disable CPU core lock files.
00:07:22.697       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:22.697                             pollers in the app support interrupt mode)
00:07:22.697   -p, --main-core <id>      main (primary) core for DPDK
00:07:22.697  
00:07:22.697  Configuration options:
00:07:22.697   -c, --config, --json  <config>     JSON config file
00:07:22.697   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:22.697       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:22.697       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:22.697       --rpcs-allowed	   comma-separated list of permitted RPCS
00:07:22.697       --json-ignore-init-errors    don't exit on invalid config entry
00:07:22.697  
00:07:22.697  Memory options:
00:07:22.697       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:22.697       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:22.697       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:22.697   -R, --huge-unlink         unlink huge files after initialization
00:07:22.697   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:22.697   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:22.697       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:22.697       --no-huge             run without using hugepages
00:07:22.697       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:22.697   -i, --shm-id <id>         shared memory ID (optional)
00:07:22.697   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:22.697  
00:07:22.697  PCI options:
00:07:22.697   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:22.697   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:22.697   -u, --no-pci              disable PCI access
00:07:22.697       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:22.697  
00:07:22.697  Log options:
00:07:22.697   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:22.697       --silence-noticelog   disable notice level logging to stderr
00:07:22.697  
00:07:22.697  Trace options:
00:07:22.697       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:22.697                                   setting 0 to disable trace (default 32768)
00:07:22.697                                   Tracepoints vary in size and can use more than one trace entry.
00:07:22.697   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:22.697                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:22.697                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:22.697                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:22.697                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:22.698                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:22.698                             in /include/spdk_internal/trace_defs.h
00:07:22.698  
00:07:22.698  Other options:
00:07:22.698   -h, --help                show this usage
00:07:22.698   -v, --version             print SPDK version
00:07:22.698   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:22.698       --env-context         Opaque context for use of the env implementation
00:07:22.698  app_ut: unrecognized option '--test-long-opt'
00:07:22.698  app_ut [options]
00:07:22.698  [usage text identical to the first listing above]
00:07:22.698  [2024-12-11 13:44:05.379300] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1204:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:07:22.698  [2024-12-11 13:44:05.379725] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1388:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:07:22.698  app_ut [options]
00:07:22.698  [usage text identical to the first listing above]
00:07:22.698  [2024-12-11 13:44:05.380083] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1290:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments
00:07:22.698  passed
00:07:22.698  
00:07:22.698  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.698                suites      1      1    n/a      0        0
00:07:22.698                 tests      1      1      1      0        0
00:07:22.698               asserts      8      8      8      0      n/a
00:07:22.698  
00:07:22.698  Elapsed time =    0.002 seconds
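(Context note: the "Duplicated option 'c'" error above comes from a consistency check between the app-specific getopt string and the generic SPDK options. A minimal sketch of that kind of check, with hypothetical names and not the actual app.c source, might be:)

    #include <stdio.h>
    #include <string.h>

    /* Reject an app-specific short-option string that reuses a character
     * already claimed by the generic option string. */
    static int check_duplicated_opts(const char *generic_opts, const char *app_opts)
    {
        for (const char *p = app_opts; *p != '\0'; p++) {
            if (*p != ':' && strchr(generic_opts, *p) != NULL) {
                fprintf(stderr, "Duplicated option '%c' between app-specific "
                        "command line parameter and generic spdk opts.\n", *p);
                return -1;
            }
        }
        return 0;
    }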
00:07:22.698   13:44:05 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut
00:07:22.698  
00:07:22.698  
00:07:22.698       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.698       http://cunit.sourceforge.net/
00:07:22.698  
00:07:22.698  
00:07:22.698  Suite: app_suite
00:07:22.698    Test: test_create_reactor ...passed
00:07:22.698    Test: test_init_reactors ...passed
00:07:22.698    Test: test_event_call ...passed
00:07:22.698    Test: test_schedule_thread ...passed
00:07:22.698    Test: test_reschedule_thread ...passed
00:07:22.698    Test: test_bind_thread ...passed
00:07:22.698    Test: test_for_each_reactor ...passed
00:07:22.698    Test: test_reactor_stats ...passed
00:07:22.698    Test: test_scheduler ...passed
00:07:22.698    Test: test_governor ...passed
00:07:22.698    Test: test_scheduler_set_isolated_core_mask ...[2024-12-11 13:44:05.455659] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:07:22.699  [2024-12-11 13:44:05.455926] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:07:22.699  passed
00:07:22.699    Test: test_mixed_workload ...passed
00:07:22.699  
00:07:22.699  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.699                suites      1      1    n/a      0        0
00:07:22.699                 tests     12     12     12      0        0
00:07:22.699               asserts    344    344    344      0      n/a
00:07:22.699  
00:07:22.699  Elapsed time =    0.033 seconds
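(Context note: the "Isolated core mask is not included in app core mask" errors above exercise a subset rule: every isolated core must also appear in the app's core mask. A sketch of that rule using 64-bit masks as a simplification — SPDK actually works with cpuset strings — might be:)

    #include <stdint.h>
    #include <stdbool.h>

    /* Valid iff the isolated mask sets no bit outside the app mask. */
    static bool isolated_mask_valid(uint64_t app_mask, uint64_t isolated_mask)
    {
        return (isolated_mask & ~app_mask) == 0;
    }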
00:07:22.957  
00:07:22.957  real	0m0.125s
00:07:22.957  user	0m0.059s
00:07:22.957  sys	0m0.066s
00:07:22.957   13:44:05 unittest.unittest_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.957   13:44:05 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x
00:07:22.957  ************************************
00:07:22.957  END TEST unittest_event
00:07:22.957  ************************************
00:07:22.957    13:44:05 unittest -- unit/unittest.sh@217 -- # uname -s
00:07:22.957   13:44:05 unittest -- unit/unittest.sh@217 -- # '[' Linux = Linux ']'
00:07:22.957   13:44:05 unittest -- unit/unittest.sh@218 -- # run_test unittest_ftl unittest_ftl
00:07:22.957   13:44:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.957   13:44:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.957   13:44:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:22.957  ************************************
00:07:22.957  START TEST unittest_ftl
00:07:22.957  ************************************
00:07:22.957   13:44:05 unittest.unittest_ftl -- common/autotest_common.sh@1129 -- # unittest_ftl
00:07:22.957   13:44:05 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut
00:07:22.957  
00:07:22.957  
00:07:22.957       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.957       http://cunit.sourceforge.net/
00:07:22.957  
00:07:22.957  
00:07:22.957  Suite: ftl_band_suite
00:07:22.957    Test: test_band_block_offset_from_addr_base ...passed
00:07:22.957    Test: test_band_block_offset_from_addr_offset ...passed
00:07:22.957    Test: test_band_addr_from_block_offset ...passed
00:07:22.957    Test: test_band_set_addr ...passed
00:07:23.215    Test: test_invalidate_addr ...passed
00:07:23.215    Test: test_next_xfer_addr ...passed
00:07:23.215  
00:07:23.215  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.215                suites      1      1    n/a      0        0
00:07:23.215                 tests      6      6      6      0        0
00:07:23.215               asserts  30356  30356  30356      0      n/a
00:07:23.215  
00:07:23.215  Elapsed time =    0.212 seconds
00:07:23.215   13:44:05 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut
00:07:23.215  
00:07:23.215  
00:07:23.215       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.215       http://cunit.sourceforge.net/
00:07:23.215  
00:07:23.215  
00:07:23.215  Suite: ftl_bitmap
00:07:23.215    Test: test_ftl_bitmap_create ...[2024-12-11 13:44:05.876526] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes
00:07:23.215  [2024-12-11 13:44:05.876766] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes
00:07:23.215  passed
00:07:23.215    Test: test_ftl_bitmap_get ...passed
00:07:23.215    Test: test_ftl_bitmap_set ...passed
00:07:23.215    Test: test_ftl_bitmap_clear ...passed
00:07:23.215    Test: test_ftl_bitmap_find_first_set ...passed
00:07:23.215    Test: test_ftl_bitmap_find_first_clear ...passed
00:07:23.215    Test: test_ftl_bitmap_count_set ...passed
00:07:23.215  
00:07:23.215  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.215                suites      1      1    n/a      0        0
00:07:23.215                 tests      7      7      7      0        0
00:07:23.215               asserts    137    137    137      0      n/a
00:07:23.215  
00:07:23.215  Elapsed time =    0.001 seconds
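(Context note: the two ftl_bitmap_create errors above correspond to two preconditions on the caller-supplied buffer. An illustrative sketch of those checks, with a hypothetical function name rather than the ftl_bitmap.c source, might be:)

    #include <stdint.h>
    #include <stddef.h>
    #include <errno.h>

    static int bitmap_buf_check(const void *buf, size_t size)
    {
        if ((uintptr_t)buf % 8 != 0) {
            return -EINVAL;   /* "Buffer for bitmap must be aligned to 8 bytes" */
        }
        if (size % 8 != 0) {
            return -EINVAL;   /* "Size of buffer for bitmap must be divisible by 8 bytes" */
        }
        return 0;
    }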
00:07:23.215   13:44:05 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut
00:07:23.215  
00:07:23.215  
00:07:23.215       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.215       http://cunit.sourceforge.net/
00:07:23.215  
00:07:23.215  
00:07:23.215  Suite: ftl_io_suite
00:07:23.215    Test: test_completion ...passed
00:07:23.215    Test: test_multiple_ios ...passed
00:07:23.215  
00:07:23.215  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.215                suites      1      1    n/a      0        0
00:07:23.215                 tests      2      2      2      0        0
00:07:23.215               asserts     47     47     47      0      n/a
00:07:23.215  
00:07:23.215  Elapsed time =    0.005 seconds
00:07:23.215   13:44:05 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut
00:07:23.215  
00:07:23.215  
00:07:23.215       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.215       http://cunit.sourceforge.net/
00:07:23.215  
00:07:23.215  
00:07:23.215  Suite: ftl_mngt
00:07:23.215    Test: test_next_step ...passed
00:07:23.215    Test: test_continue_step ...passed
00:07:23.215    Test: test_get_func_and_step_cntx_alloc ...passed
00:07:23.215    Test: test_fail_step ...passed
00:07:23.215    Test: test_mngt_call_and_call_rollback ...passed
00:07:23.215    Test: test_nested_process_failure ...passed
00:07:23.215    Test: test_call_init_success ...passed
00:07:23.215    Test: test_call_init_failure ...passed
00:07:23.215  
00:07:23.215  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.215                suites      1      1    n/a      0        0
00:07:23.215                 tests      8      8      8      0        0
00:07:23.215               asserts    196    196    196      0      n/a
00:07:23.215  
00:07:23.215  Elapsed time =    0.002 seconds
00:07:23.215   13:44:05 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut
00:07:23.215  
00:07:23.216  
00:07:23.216       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.216       http://cunit.sourceforge.net/
00:07:23.216  
00:07:23.216  
00:07:23.216  Suite: ftl_mempool
00:07:23.216    Test: test_ftl_mempool_create ...passed
00:07:23.216    Test: test_ftl_mempool_get_put ...passed
00:07:23.216  
00:07:23.216  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.216                suites      1      1    n/a      0        0
00:07:23.216                 tests      2      2      2      0        0
00:07:23.216               asserts     36     36     36      0      n/a
00:07:23.216  
00:07:23.216  Elapsed time =    0.000 seconds
00:07:23.474   13:44:06 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut
00:07:23.474  
00:07:23.474  
00:07:23.474       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.474       http://cunit.sourceforge.net/
00:07:23.474  
00:07:23.474  
00:07:23.474  Suite: ftl_addr64_suite
00:07:23.474    Test: test_addr_cached ...passed
00:07:23.474  
00:07:23.474  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.474                suites      1      1    n/a      0        0
00:07:23.474                 tests      1      1      1      0        0
00:07:23.474               asserts   1536   1536   1536      0      n/a
00:07:23.474  
00:07:23.474  Elapsed time =    0.000 seconds
00:07:23.474   13:44:06 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut
00:07:23.474  
00:07:23.474  
00:07:23.474       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.474       http://cunit.sourceforge.net/
00:07:23.474  
00:07:23.474  
00:07:23.474  Suite: ftl_sb
00:07:23.474    Test: test_sb_crc_v2 ...passed
00:07:23.474    Test: test_sb_crc_v3 ...passed
00:07:23.474    Test: test_sb_v3_md_layout ...[2024-12-11 13:44:06.052380] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions
00:07:23.474  [2024-12-11 13:44:06.052647] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:23.474  [2024-12-11 13:44:06.052684] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:23.474  [2024-12-11 13:44:06.052716] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:23.474  [2024-12-11 13:44:06.052748] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:23.474  [2024-12-11 13:44:06.052770] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found
00:07:23.474  [2024-12-11 13:44:06.052802] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:23.474  [2024-12-11 13:44:06.052829] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:23.474  [2024-12-11 13:44:06.052913] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:23.474  [2024-12-11 13:44:06.052948] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:23.474  [2024-12-11 13:44:06.052990] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:23.474  passed
00:07:23.474    Test: test_sb_v5_md_layout ...passed
00:07:23.474  
00:07:23.474  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.474                suites      1      1    n/a      0        0
00:07:23.474                 tests      4      4      4      0        0
00:07:23.474               asserts    170    170    170      0      n/a
00:07:23.474  
00:07:23.474  Elapsed time =    0.002 seconds
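(Context note: the "Looping regions found" errors above suggest the superblock loader bounds its walk of the metadata region chain so a cycle is reported rather than iterated forever. One illustrative way to implement such a bound — hypothetical types and names, not the ftl_sb_v3 source — is:)

    #include <stddef.h>

    struct md_region {
        struct md_region *next;
    };

    /* Returns -1 if the chain visits more nodes than regions can exist,
     * i.e. the list must contain a loop. */
    static int walk_regions(struct md_region *head, size_t max_regions)
    {
        size_t visited = 0;

        for (struct md_region *r = head; r != NULL; r = r->next) {
            if (++visited > max_regions) {
                return -1;
            }
        }
        return 0;
    }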
00:07:23.474   13:44:06 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut
00:07:23.474  
00:07:23.474  
00:07:23.474       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.474       http://cunit.sourceforge.net/
00:07:23.474  
00:07:23.474  
00:07:23.474  Suite: ftl_layout_upgrade
00:07:23.474    Test: test_l2p_upgrade ...passed
00:07:23.474  
00:07:23.474  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.474                suites      1      1    n/a      0        0
00:07:23.474                 tests      1      1      1      0        0
00:07:23.474               asserts    164    164    164      0      n/a
00:07:23.474  
00:07:23.474  Elapsed time =    0.001 seconds
00:07:23.474   13:44:06 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut
00:07:23.474  
00:07:23.474  
00:07:23.474       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.474       http://cunit.sourceforge.net/
00:07:23.474  
00:07:23.474  
00:07:23.474  Suite: ftl_p2l_suite
00:07:23.474    Test: test_p2l_num_pages ...passed
00:07:23.474    Test: test_ckpt_issue ...passed
00:07:23.474    Test: test_persist_band_p2l ...passed
00:07:23.474    Test: test_clean_restore_p2l ...passed
00:07:23.474    Test: test_dirty_restore_p2l ...passed
00:07:23.474  
00:07:23.474  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.474                suites      1      1    n/a      0        0
00:07:23.474                 tests      5      5      5      0        0
00:07:23.474               asserts  10020  10020  10020      0      n/a
00:07:23.474  
00:07:23.474  Elapsed time =    0.077 seconds
00:07:23.474  
00:07:23.474  real	0m0.689s
00:07:23.474  user	0m0.314s
00:07:23.474  sys	0m0.376s
00:07:23.474   13:44:06 unittest.unittest_ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.474   13:44:06 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x
00:07:23.474  ************************************
00:07:23.474  END TEST unittest_ftl
00:07:23.474  ************************************
00:07:23.732   13:44:06 unittest -- unit/unittest.sh@221 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:23.732   13:44:06 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.732   13:44:06 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.732   13:44:06 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:23.732  ************************************
00:07:23.732  START TEST unittest_accel
00:07:23.732  ************************************
00:07:23.732   13:44:06 unittest.unittest_accel -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:23.732  
00:07:23.732  
00:07:23.732       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.732       http://cunit.sourceforge.net/
00:07:23.732  
00:07:23.732  
00:07:23.732  Suite: accel_sequence
00:07:23.732    Test: test_sequence_fill_copy ...passed
00:07:23.732    Test: test_sequence_abort ...passed
00:07:23.732    Test: test_sequence_append_error ...passed
00:07:23.732    Test: test_sequence_completion_error ...[2024-12-11 13:44:06.309292] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2384:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7a8e951357c0
00:07:23.732  [2024-12-11 13:44:06.309595] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2384:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7a8e951357c0
00:07:23.732  [2024-12-11 13:44:06.309699] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2297:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7a8e951357c0
00:07:23.732  [2024-12-11 13:44:06.309785] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2297:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7a8e951357c0
00:07:23.732  passed
00:07:23.732    Test: test_sequence_decompress ...passed
00:07:23.732    Test: test_sequence_reverse ...passed
00:07:23.732    Test: test_sequence_copy_elision ...passed
00:07:23.732    Test: test_sequence_accel_buffers ...passed
00:07:23.732    Test: test_sequence_memory_domain ...[2024-12-11 13:44:06.324482] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2189:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7
00:07:23.732  [2024-12-11 13:44:06.324731] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2228:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98
00:07:23.732  passed
00:07:23.732    Test: test_sequence_module_memory_domain ...passed
00:07:23.732    Test: test_sequence_crypto ...passed
00:07:23.732    Test: test_sequence_driver ...[2024-12-11 13:44:06.333828] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2336:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7a8e91aa97c0 using driver: ut
00:07:23.732  [2024-12-11 13:44:06.333934] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2397:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7a8e91aa97c0 through driver: ut
00:07:23.732  passed
00:07:23.732    Test: test_sequence_same_iovs ...passed
00:07:23.732    Test: test_sequence_crc32 ...passed
00:07:23.732    Test: test_sequence_dix_generate_verify ...passed
00:07:23.732    Test: test_sequence_dix ...passed
00:07:23.732  Suite: accel
00:07:23.732    Test: test_spdk_accel_task_complete ...passed
00:07:23.732    Test: test_get_task ...passed
00:07:23.732    Test: test_spdk_accel_submit_copy ...passed
00:07:23.732    Test: test_spdk_accel_submit_dualcast ...[2024-12-11 13:44:06.346253] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:23.732  [2024-12-11 13:44:06.346334] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:23.732  passed
00:07:23.732    Test: test_spdk_accel_submit_compare ...passed
00:07:23.732    Test: test_spdk_accel_submit_fill ...passed
00:07:23.732    Test: test_spdk_accel_submit_crc32c ...passed
00:07:23.732    Test: test_spdk_accel_submit_crc32cv ...passed
00:07:23.732    Test: test_spdk_accel_submit_copy_crc32c ...passed
00:07:23.732    Test: test_spdk_accel_submit_xor ...passed
00:07:23.732    Test: test_spdk_accel_module_find_by_name ...passed
00:07:23.732    Test: test_spdk_accel_module_register ...passed
00:07:23.732  
00:07:23.732  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.732                suites      2      2    n/a      0        0
00:07:23.732                 tests     28     28     28      0        0
00:07:23.732               asserts    884    884    884      0      n/a
00:07:23.732  
00:07:23.732  Elapsed time =    0.052 seconds
00:07:23.732  
00:07:23.732  real	0m0.096s
00:07:23.732  user	0m0.048s
00:07:23.732  sys	0m0.049s
00:07:23.732   13:44:06 unittest.unittest_accel -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.732   13:44:06 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x
00:07:23.732  ************************************
00:07:23.733  END TEST unittest_accel
00:07:23.733  ************************************
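(Context note: the "Dualcast requires 4K alignment on dst addresses" errors in the accel suite above boil down to an alignment precondition on both destination buffers. A minimal sketch of that check, with a hypothetical function name, might be:)

    #include <stdint.h>
    #include <stdbool.h>

    #define DUALCAST_ALIGN 4096u

    static bool dualcast_dsts_aligned(const void *dst1, const void *dst2)
    {
        return ((uintptr_t)dst1 % DUALCAST_ALIGN) == 0 &&
               ((uintptr_t)dst2 % DUALCAST_ALIGN) == 0;
    }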
00:07:23.733   13:44:06 unittest -- unit/unittest.sh@222 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:23.733   13:44:06 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.733   13:44:06 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.733   13:44:06 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:23.733  ************************************
00:07:23.733  START TEST unittest_ioat
00:07:23.733  ************************************
00:07:23.733   13:44:06 unittest.unittest_ioat -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:23.733  
00:07:23.733  
00:07:23.733       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.733       http://cunit.sourceforge.net/
00:07:23.733  
00:07:23.733  
00:07:23.733  Suite: ioat
00:07:23.733    Test: ioat_state_check ...passed
00:07:23.733  
00:07:23.733  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.733                suites      1      1    n/a      0        0
00:07:23.733                 tests      1      1      1      0        0
00:07:23.733               asserts     32     32     32      0      n/a
00:07:23.733  
00:07:23.733  Elapsed time =    0.000 seconds
00:07:23.733  
00:07:23.733  real	0m0.037s
00:07:23.733  user	0m0.015s
00:07:23.733  sys	0m0.022s
00:07:23.733   13:44:06 unittest.unittest_ioat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.733   13:44:06 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x
00:07:23.733  ************************************
00:07:23.733  END TEST unittest_ioat
00:07:23.733  ************************************
00:07:23.733   13:44:06 unittest -- unit/unittest.sh@223 -- # [[ y == y ]]
00:07:23.733   13:44:06 unittest -- unit/unittest.sh@224 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:23.733   13:44:06 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.733   13:44:06 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.733   13:44:06 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:23.733  ************************************
00:07:23.733  START TEST unittest_idxd_user
00:07:23.733  ************************************
00:07:23.733   13:44:06 unittest.unittest_idxd_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:23.991  
00:07:23.991  
00:07:23.991       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.991       http://cunit.sourceforge.net/
00:07:23.991  
00:07:23.991  
00:07:23.991  Suite: idxd_user
00:07:23.991    Test: test_idxd_wait_cmd ...[2024-12-11 13:44:06.511790] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:23.991  passed
00:07:23.991    Test: test_idxd_reset_dev ...[2024-12-11 13:44:06.511940] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1
00:07:23.991  [2024-12-11 13:44:06.512002] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:23.991  [2024-12-11 13:44:06.512029] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:07:23.991  passed
00:07:23.991    Test: test_idxd_group_config ...passed
00:07:23.991    Test: test_idxd_wq_config ...passed
00:07:23.991  
00:07:23.991  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.992                suites      1      1    n/a      0        0
00:07:23.992                 tests      4      4      4      0        0
00:07:23.992               asserts     20     20     20      0      n/a
00:07:23.992  
00:07:23.992  Elapsed time =    0.000 seconds
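(Context note: the idxd_user tests above exercise a wait-for-command pattern: poll a status register, treat a stuck-at-zero register as a timeout and a bad status value as an error. The register encoding below is purely hypothetical; only the error strings are taken from the log.)

    #include <stdint.h>
    #include <errno.h>

    #define CMD_DONE_OK 0x2u   /* hypothetical "completed successfully" encoding */

    static int wait_cmd(volatile uint32_t *status_reg, int max_polls)
    {
        uint32_t status = 0;

        for (int i = 0; i < max_polls && status == 0; i++) {
            status = *status_reg;                  /* 0: still executing (hypothetical) */
        }
        if (status == 0) {
            return -EBUSY;                         /* "Command timeout, waited ..." */
        }
        return (status == CMD_DONE_OK) ? 0 : -EIO; /* "status reg reports error 0x1" */
    }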
00:07:23.992  
00:07:23.992  real	0m0.029s
00:07:23.992  user	0m0.014s
00:07:23.992  sys	0m0.015s
00:07:23.992   13:44:06 unittest.unittest_idxd_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.992   13:44:06 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x
00:07:23.992  ************************************
00:07:23.992  END TEST unittest_idxd_user
00:07:23.992  ************************************
00:07:23.992   13:44:06 unittest -- unit/unittest.sh@226 -- # run_test unittest_iscsi unittest_iscsi
00:07:23.992   13:44:06 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.992   13:44:06 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.992   13:44:06 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:23.992  ************************************
00:07:23.992  START TEST unittest_iscsi
00:07:23.992  ************************************
00:07:23.992   13:44:06 unittest.unittest_iscsi -- common/autotest_common.sh@1129 -- # unittest_iscsi
00:07:23.992   13:44:06 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:07:23.992  
00:07:23.992  
00:07:23.992       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.992       http://cunit.sourceforge.net/
00:07:23.992  
00:07:23.992  
00:07:23.992  Suite: conn_suite
00:07:23.992    Test: read_task_split_in_order_case ...passed
00:07:23.992    Test: read_task_split_reverse_order_case ...passed
00:07:23.992    Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:07:23.992    Test: process_non_read_task_completion_test ...passed
00:07:23.992    Test: free_tasks_on_connection ...passed
00:07:23.992    Test: free_tasks_with_queued_datain ...passed
00:07:23.992    Test: abort_queued_datain_task_test ...passed
00:07:23.992    Test: abort_queued_datain_tasks_test ...passed
00:07:23.992  
00:07:23.992  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.992                suites      1      1    n/a      0        0
00:07:23.992                 tests      8      8      8      0        0
00:07:23.992               asserts    230    230    230      0      n/a
00:07:23.992  
00:07:23.992  Elapsed time =    0.000 seconds
00:07:23.992   13:44:06 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:07:23.992  
00:07:23.992  
00:07:23.992       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.992       http://cunit.sourceforge.net/
00:07:23.992  
00:07:23.992  
00:07:23.992  Suite: iscsi_suite
00:07:23.992    Test: param_negotiation_test ...passed
00:07:23.992    Test: list_negotiation_test ...passed
00:07:23.992    Test: parse_valid_test ...passed
00:07:23.992    Test: parse_invalid_test ...[2024-12-11 13:44:06.656533] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:07:23.992  [2024-12-11 13:44:06.657039] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:07:23.992  [2024-12-11 13:44:06.657121] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key
00:07:23.992  [2024-12-11 13:44:06.657202] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193
00:07:23.992  [2024-12-11 13:44:06.657410] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256
00:07:23.992  [2024-12-11 13:44:06.657498] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63
00:07:23.992  [2024-12-11 13:44:06.657655] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B
00:07:23.992  passed
00:07:23.992  
00:07:23.992  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.992                suites      1      1    n/a      0        0
00:07:23.992                 tests      4      4      4      0        0
00:07:23.992               asserts    161    161    161      0      n/a
00:07:23.992  
00:07:23.992  Elapsed time =    0.010 seconds
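(Context note: parse_invalid_test above drives the key=value validation paths one by one. A sketch of checks that would produce those errors — limits inferred from the messages, names hypothetical — might be:)

    #include <string.h>
    #include <errno.h>

    #define MAX_KEY_LEN 63
    #define MAX_VAL_LEN 8192   /* "Overflow Val 8193" suggests an 8192-byte cap */

    static int parse_param(const char *param)
    {
        const char *eq = strchr(param, '=');

        if (eq == NULL) {
            return -EINVAL;                        /* "'=' not found" */
        }
        if (eq == param) {
            return -EINVAL;                        /* "Empty key" */
        }
        if ((size_t)(eq - param) > MAX_KEY_LEN) {
            return -EINVAL;                        /* "Key name length is bigger than 63" */
        }
        if (strlen(eq + 1) > MAX_VAL_LEN) {
            return -EINVAL;                        /* "Overflow Val ..." */
        }
        return 0;
    }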
00:07:23.992   13:44:06 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:07:23.992  
00:07:23.992  
00:07:23.992       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.992       http://cunit.sourceforge.net/
00:07:23.992  
00:07:23.992  
00:07:23.992  Suite: iscsi_target_node_suite
00:07:23.992    Test: add_lun_test_cases ...[2024-12-11 13:44:06.697612] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1)
00:07:23.992  [2024-12-11 13:44:06.697851] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative
00:07:23.992  [2024-12-11 13:44:06.697903] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:23.992  [2024-12-11 13:44:06.697940] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:23.992  passed
00:07:23.992    Test: allow_any_allowed ...[2024-12-11 13:44:06.697976] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed
00:07:23.992  passed
00:07:23.992    Test: allow_ipv6_allowed ...passed
00:07:23.992    Test: allow_ipv6_denied ...passed
00:07:23.992    Test: allow_ipv6_invalid ...passed
00:07:23.992    Test: allow_ipv4_allowed ...passed
00:07:23.992    Test: allow_ipv4_denied ...passed
00:07:23.992    Test: allow_ipv4_invalid ...passed
00:07:23.992    Test: node_access_allowed ...passed
00:07:23.992    Test: node_access_denied_by_empty_netmask ...passed
00:07:23.992    Test: node_access_multi_initiator_groups_cases ...passed
00:07:23.992    Test: allow_iscsi_name_multi_maps_case ...passed
00:07:23.992    Test: chap_param_test_cases ...[2024-12-11 13:44:06.698732] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0)
00:07:23.992  [2024-12-11 13:44:06.698780] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1)
00:07:23.992  [2024-12-11 13:44:06.698815] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1)
00:07:23.992  [2024-12-11 13:44:06.698848] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1)
00:07:23.992  [2024-12-11 13:44:06.698891] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1)
00:07:23.992  passed
00:07:23.992  
00:07:23.992  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.992                suites      1      1    n/a      0        0
00:07:23.992                 tests     13     13     13      0        0
00:07:23.992               asserts     50     50     50      0      n/a
00:07:23.992  
00:07:23.992  Elapsed time =    0.001 seconds
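(Context note: the CHAP combinations rejected above — (d=1,r=1,m=0), (d=0,r=0,m=1), (d=1,r=0,m=1), (d=1,r=1,m=1) — are all explained by two rules: "disable" conflicts with "require", and "mutual" only makes sense when "require" is set. An illustrative sketch with hypothetical names:)

    #include <stdbool.h>

    static bool chap_params_valid(bool disable, bool require, bool mutual)
    {
        if (disable && require) {
            return false;   /* (d=1,r=1,m=*) rejected */
        }
        if (mutual && !require) {
            return false;   /* (d=*,r=0,m=1) rejected */
        }
        return true;
    }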
00:07:23.992   13:44:06 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut
00:07:23.992  
00:07:23.992  
00:07:23.992       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.992       http://cunit.sourceforge.net/
00:07:23.992  
00:07:23.992  
00:07:23.992  Suite: iscsi_suite
00:07:23.992    Test: op_login_check_target_test ...[2024-12-11 13:44:06.744259] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied
00:07:23.992  passed
00:07:23.992    Test: op_login_session_normal_test ...[2024-12-11 13:44:06.744674] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:23.992  [2024-12-11 13:44:06.744746] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:23.992  [2024-12-11 13:44:06.744803] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:23.992  [2024-12-11 13:44:06.744873] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed
00:07:23.992  [2024-12-11 13:44:06.744933] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:23.992  [2024-12-11 13:44:06.745015] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0
00:07:23.992  [2024-12-11 13:44:06.745058] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:23.992  passed
00:07:23.992    Test: maxburstlength_test ...[2024-12-11 13:44:06.745457] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:23.992  passed
00:07:23.992    Test: underflow_for_read_transfer_test ...[2024-12-11 13:44:06.745510] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL)
00:07:23.992  passed
00:07:23.992    Test: underflow_for_zero_read_transfer_test ...passed
00:07:23.992    Test: underflow_for_request_sense_test ...passed
00:07:23.992    Test: underflow_for_check_condition_test ...passed
00:07:23.992    Test: add_transfer_task_test ...passed
00:07:23.992    Test: get_transfer_task_test ...passed
00:07:23.992    Test: del_transfer_task_test ...passed
00:07:23.992    Test: clear_all_transfer_tasks_test ...passed
00:07:23.992    Test: build_iovs_test ...passed
00:07:23.992    Test: build_iovs_with_md_test ...passed
00:07:23.992    Test: pdu_hdr_op_login_test ...[2024-12-11 13:44:06.747667] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error
00:07:23.992  [2024-12-11 13:44:06.747826] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0
00:07:23.992  [2024-12-11 13:44:06.747940] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2
00:07:23.992  passed
00:07:23.992    Test: pdu_hdr_op_text_test ...[2024-12-11 13:44:06.748097] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:23.992  [2024-12-11 13:44:06.748215] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue
00:07:23.992  [2024-12-11 13:44:06.748265] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678...
00:07:23.992  passed
00:07:23.992    Test: pdu_hdr_op_logout_test ...[2024-12-11 13:44:06.748408] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason.
00:07:23.992  passed
00:07:23.992    Test: pdu_hdr_op_scsi_test ...[2024-12-11 13:44:06.748569] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:23.992  [2024-12-11 13:44:06.748646] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:23.992  [2024-12-11 13:44:06.748707] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported
00:07:23.993  [2024-12-11 13:44:06.748792] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:23.993  [2024-12-11 13:44:06.748898] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67)
00:07:23.993  [2024-12-11 13:44:06.749116] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:07:23.993  passed
00:07:23.993    Test: pdu_hdr_op_task_mgmt_test ...[2024-12-11 13:44:06.749305] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session
00:07:23.993  [2024-12-11 13:44:06.749380] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0
00:07:23.993  passed
00:07:23.993    Test: pdu_hdr_op_nopout_test ...[2024-12-11 13:44:06.749590] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session
00:07:23.993  [2024-12-11 13:44:06.749748] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:23.993  passed
00:07:23.993    Test: pdu_hdr_op_data_test ...[2024-12-11 13:44:06.749788] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:23.993  [2024-12-11 13:44:06.749808] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0
00:07:23.993  [2024-12-11 13:44:06.749861] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session
00:07:23.993  [2024-12-11 13:44:06.749921] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0
00:07:23.993  [2024-12-11 13:44:06.749989] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:23.993  [2024-12-11 13:44:06.750015] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1
00:07:23.993  [2024-12-11 13:44:06.750085] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error
00:07:23.993  [2024-12-11 13:44:06.750133] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error
00:07:23.993  [2024-12-11 13:44:06.750170] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535)
00:07:23.993  passed
00:07:23.993    Test: empty_text_with_cbit_test ...passed
00:07:23.993    Test: pdu_payload_read_test ...[2024-12-11 13:44:06.751950] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536)
00:07:23.993  passed
00:07:23.993    Test: data_out_pdu_sequence_test ...passed
00:07:23.993    Test: immediate_data_and_data_out_pdu_sequence_test ...passed
00:07:23.993  
00:07:23.993  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.993                suites      1      1    n/a      0        0
00:07:23.993                 tests     24     24     24      0        0
00:07:23.993               asserts 150253 150253 150253      0      n/a
00:07:23.993  
00:07:23.993  Elapsed time =    0.015 seconds
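(Context note: the maxburstlength and data-out errors above reflect two iSCSI length rules: a DATA-OUT PDU may not carry more data than the R2T solicited, and an R2T burst may not exceed the negotiated MaxBurstLength — hence "R2T burst(65536) > MaxBurstLength(65535)". A sketch of those checks, hypothetical names only:)

    #include <stdint.h>
    #include <errno.h>

    static int check_dataout_len(uint32_t pdu_data_len, uint32_t r2t_desired_len,
                                 uint32_t burst_len, uint32_t max_burst_len)
    {
        if (pdu_data_len > r2t_desired_len) {
            return -EINVAL;   /* data-out length larger than the value sent by R2T PDU */
        }
        if (burst_len > max_burst_len) {
            return -EINVAL;   /* R2T burst exceeds negotiated MaxBurstLength */
        }
        return 0;
    }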
00:07:24.273   13:44:06 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut
00:07:24.273  
00:07:24.273  
00:07:24.273       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.273       http://cunit.sourceforge.net/
00:07:24.273  
00:07:24.273  
00:07:24.273  Suite: init_grp_suite
00:07:24.273    Test: create_initiator_group_success_case ...passed
00:07:24.273    Test: find_initiator_group_success_case ...passed
00:07:24.273    Test: register_initiator_group_twice_case ...passed
00:07:24.273    Test: add_initiator_name_success_case ...passed
00:07:24.273    Test: add_initiator_name_fail_case ...[2024-12-11 13:44:06.798274] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c:  54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed
00:07:24.273  passed
00:07:24.273    Test: delete_all_initiator_names_success_case ...passed
00:07:24.273    Test: add_netmask_success_case ...passed
00:07:24.273    Test: add_netmask_fail_case ...[2024-12-11 13:44:06.798607] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed
00:07:24.273  passed
00:07:24.273    Test: delete_all_netmasks_success_case ...passed
00:07:24.273    Test: initiator_name_overwrite_all_to_any_case ...passed
00:07:24.273    Test: netmask_overwrite_all_to_any_case ...passed
00:07:24.273    Test: add_delete_initiator_names_case ...passed
00:07:24.273    Test: add_duplicated_initiator_names_case ...passed
00:07:24.273    Test: delete_nonexisting_initiator_names_case ...passed
00:07:24.273    Test: add_delete_netmasks_case ...passed
00:07:24.273    Test: add_duplicated_netmasks_case ...passed
00:07:24.273    Test: delete_nonexisting_netmasks_case ...passed
00:07:24.273  
00:07:24.273  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.273                suites      1      1    n/a      0        0
00:07:24.273                 tests     17     17     17      0        0
00:07:24.273               asserts    108    108    108      0      n/a
00:07:24.273  
00:07:24.273  Elapsed time =    0.001 seconds
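(Context note: the two failure cases above are capacity checks; additions are refused once an initiator group already holds 256 names or netmasks. A minimal sketch, hypothetical names:)

    #include <errno.h>

    #define MAX_INITIATOR 256
    #define MAX_NETMASK   256

    static int add_initiator(int current_count)
    {
        if (current_count >= MAX_INITIATOR) {
            return -EPERM;   /* "> MAX_INITIATOR(=256) is not allowed" */
        }
        return 0;            /* caller appends the new initiator name */
    }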
00:07:24.273   13:44:06 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut
00:07:24.273  
00:07:24.273  
00:07:24.273       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.273       http://cunit.sourceforge.net/
00:07:24.273  
00:07:24.273  
00:07:24.273  Suite: portal_grp_suite
00:07:24.273    Test: portal_create_ipv4_normal_case ...passed
00:07:24.273    Test: portal_create_ipv6_normal_case ...passed
00:07:24.273    Test: portal_create_ipv4_wildcard_case ...passed
00:07:24.274    Test: portal_create_ipv6_wildcard_case ...passed
00:07:24.274    Test: portal_create_twice_case ...[2024-12-11 13:44:06.832947] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists
00:07:24.274  passed
00:07:24.274    Test: portal_grp_register_unregister_case ...passed
00:07:24.274    Test: portal_grp_register_twice_case ...passed
00:07:24.274    Test: portal_grp_add_delete_case ...passed
00:07:24.274    Test: portal_grp_add_delete_twice_case ...passed
00:07:24.274  
00:07:24.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.274                suites      1      1    n/a      0        0
00:07:24.274                 tests      9      9      9      0        0
00:07:24.274               asserts     44     44     44      0      n/a
00:07:24.274  
00:07:24.274  Elapsed time =    0.005 seconds
00:07:24.274  
00:07:24.274  real	0m0.276s
00:07:24.274  user	0m0.141s
00:07:24.274  sys	0m0.137s
00:07:24.274   13:44:06 unittest.unittest_iscsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.274   13:44:06 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x
00:07:24.274  ************************************
00:07:24.274  END TEST unittest_iscsi
00:07:24.274  ************************************
00:07:24.274   13:44:06 unittest -- unit/unittest.sh@227 -- # run_test unittest_json unittest_json
00:07:24.274   13:44:06 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.274   13:44:06 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.274   13:44:06 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.274  ************************************
00:07:24.274  START TEST unittest_json
00:07:24.274  ************************************
00:07:24.274   13:44:06 unittest.unittest_json -- common/autotest_common.sh@1129 -- # unittest_json
00:07:24.274   13:44:06 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:07:24.274  
00:07:24.274  
00:07:24.274       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.274       http://cunit.sourceforge.net/
00:07:24.274  
00:07:24.274  
00:07:24.274  Suite: json
00:07:24.274    Test: test_parse_literal ...passed
00:07:24.274    Test: test_parse_string_simple ...passed
00:07:24.274    Test: test_parse_string_control_chars ...passed
00:07:24.274    Test: test_parse_string_utf8 ...passed
00:07:24.274    Test: test_parse_string_escapes_twochar ...passed
00:07:24.274    Test: test_parse_string_escapes_unicode ...passed
00:07:24.274    Test: test_parse_number ...passed
00:07:24.274    Test: test_parse_array ...passed
00:07:24.274    Test: test_parse_object ...passed
00:07:24.274    Test: test_parse_nesting ...passed
00:07:24.274    Test: test_parse_comment ...passed
00:07:24.274  
00:07:24.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.274                suites      1      1    n/a      0        0
00:07:24.274                 tests     11     11     11      0        0
00:07:24.274               asserts   1516   1516   1516      0      n/a
00:07:24.274  
00:07:24.274  Elapsed time =    0.002 seconds
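
The json_parse suite covers SPDK's two-pass JSON parser. A small sketch of
driving it through the public spdk_json_parse() API from spdk/json.h; the
buffer contents are arbitrary and the flag name is assumed from the public
header:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/json.h"

    int main(void)
    {
        char buf[] = "{\"method\": \"bdev_get_bdevs\"}";
        struct spdk_json_val values[16];
        ssize_t rc;

        /* First pass with values == NULL reports how many values are needed. */
        rc = spdk_json_parse(buf, strlen(buf), NULL, 0, NULL, 0);
        if (rc < 0 || rc > 16) {
            return 1;
        }
        /* Second pass fills the value array, decoding strings in place. */
        rc = spdk_json_parse(buf, strlen(buf), values, 16,
                             NULL, SPDK_JSON_PARSE_FLAG_DECODE_IN_PLACE);
        printf("parsed %zd JSON values\n", rc);
        return rc < 0 ? 1 : 0;
    }
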
00:07:24.274   13:44:06 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:07:24.274  
00:07:24.274  
00:07:24.274       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.274       http://cunit.sourceforge.net/
00:07:24.274  
00:07:24.274  
00:07:24.274  Suite: json
00:07:24.274    Test: test_strequal ...passed
00:07:24.274    Test: test_num_to_uint16 ...passed
00:07:24.274    Test: test_num_to_int32 ...passed
00:07:24.274    Test: test_num_to_uint64 ...passed
00:07:24.274    Test: test_decode_object ...passed
00:07:24.274    Test: test_decode_array ...passed
00:07:24.274    Test: test_decode_bool ...passed
00:07:24.274    Test: test_decode_uint16 ...passed
00:07:24.274    Test: test_decode_int32 ...passed
00:07:24.274    Test: test_decode_uint32 ...passed
00:07:24.274    Test: test_decode_uint64 ...passed
00:07:24.274    Test: test_decode_string ...passed
00:07:24.274    Test: test_decode_uuid ...passed
00:07:24.274    Test: test_find ...passed
00:07:24.274    Test: test_find_array ...passed
00:07:24.274    Test: test_iterating ...passed
00:07:24.274    Test: test_free_object ...passed
00:07:24.274  
00:07:24.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.274                suites      1      1    n/a      0        0
00:07:24.274                 tests     17     17     17      0        0
00:07:24.274               asserts    236    236    236      0      n/a
00:07:24.274  
00:07:24.274  Elapsed time =    0.001 seconds
00:07:24.274   13:44:06 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut
00:07:24.274  
00:07:24.274  
00:07:24.274       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.274       http://cunit.sourceforge.net/
00:07:24.274  
00:07:24.274  
00:07:24.274  Suite: json
00:07:24.274    Test: test_write_literal ...passed
00:07:24.274    Test: test_write_string_simple ...passed
00:07:24.274    Test: test_write_string_escapes ...passed
00:07:24.274    Test: test_write_string_utf16le ...passed
00:07:24.274    Test: test_write_number_int32 ...passed
00:07:24.274    Test: test_write_number_uint32 ...passed
00:07:24.274    Test: test_write_number_uint128 ...passed
00:07:24.274    Test: test_write_string_number_uint128 ...passed
00:07:24.274    Test: test_write_number_int64 ...passed
00:07:24.274    Test: test_write_number_uint64 ...passed
00:07:24.274    Test: test_write_number_double ...passed
00:07:24.274    Test: test_write_uuid ...passed
00:07:24.274    Test: test_write_array ...passed
00:07:24.274    Test: test_write_object ...passed
00:07:24.274    Test: test_write_nesting ...passed
00:07:24.274    Test: test_write_val ...passed
00:07:24.274  
00:07:24.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.274                suites      1      1    n/a      0        0
00:07:24.274                 tests     16     16     16      0        0
00:07:24.274               asserts    918    918    918      0      n/a
00:07:24.274  
00:07:24.274  Elapsed time =    0.006 seconds
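
The writer tests above map onto the spdk_json_write_* API. A sketch that emits
a small object through a stdout callback; the callback wiring is an
assumption, not SPDK test code:

    #include <stdio.h>
    #include "spdk/json.h"

    static int
    write_cb(void *cb_ctx, const void *data, size_t size)
    {
        fwrite(data, 1, size, stdout);
        return 0;
    }

    int main(void)
    {
        struct spdk_json_write_ctx *w = spdk_json_write_begin(write_cb, NULL, 0);

        spdk_json_write_object_begin(w);               /* cf. test_write_object */
        spdk_json_write_named_string(w, "name", "ut"); /* cf. test_write_string_simple */
        spdk_json_write_named_uint32(w, "count", 16);  /* cf. test_write_number_uint32 */
        spdk_json_write_object_end(w);
        return spdk_json_write_end(w);                 /* flushes and frees w */
    }
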
00:07:24.274   13:44:07 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut
00:07:24.274  
00:07:24.274  
00:07:24.274       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.274       http://cunit.sourceforge.net/
00:07:24.274  
00:07:24.274  
00:07:24.274  Suite: jsonrpc
00:07:24.274    Test: test_parse_request ...passed
00:07:24.274    Test: test_parse_request_streaming ...passed
00:07:24.274  
00:07:24.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.274                suites      1      1    n/a      0        0
00:07:24.274                 tests      2      2      2      0        0
00:07:24.274               asserts    289    289    289      0      n/a
00:07:24.274  
00:07:24.274  Elapsed time =    0.005 seconds
00:07:24.274  
00:07:24.274  real	0m0.150s
00:07:24.274  user	0m0.078s
00:07:24.274  sys	0m0.074s
00:07:24.274   13:44:07 unittest.unittest_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.274   13:44:07 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x
00:07:24.274  ************************************
00:07:24.274  END TEST unittest_json
00:07:24.274  ************************************
00:07:24.533   13:44:07 unittest -- unit/unittest.sh@228 -- # run_test unittest_rpc unittest_rpc
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.533  ************************************
00:07:24.533  START TEST unittest_rpc
00:07:24.533  ************************************
00:07:24.533   13:44:07 unittest.unittest_rpc -- common/autotest_common.sh@1129 -- # unittest_rpc
00:07:24.533   13:44:07 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut
00:07:24.533  
00:07:24.533  
00:07:24.533       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.533       http://cunit.sourceforge.net/
00:07:24.533  
00:07:24.533  
00:07:24.533  Suite: rpc
00:07:24.533    Test: test_jsonrpc_handler ...passed
00:07:24.533    Test: test_spdk_rpc_is_method_allowed ...passed
00:07:24.533    Test: test_rpc_get_methods ...passed
00:07:24.533    Test: test_rpc_spdk_get_version ...passed
00:07:24.533    Test: test_spdk_rpc_listen_close ...[2024-12-11 13:44:07.104487] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed
00:07:24.533  passed
00:07:24.533    Test: test_rpc_run_multiple_servers ...passed
00:07:24.533  
00:07:24.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.533                suites      1      1    n/a      0        0
00:07:24.533                 tests      6      6      6      0        0
00:07:24.533               asserts     23     23     23      0      n/a
00:07:24.533  
00:07:24.533  Elapsed time =    0.001 seconds
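
The one error logged in this suite (spdk_json_decode_object failed) comes from
rpc_get_methods rejecting malformed parameters. A sketch of that decode path
using the public spdk/json.h decoder API; the request struct and its field are
illustrative:

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "spdk/json.h"

    struct req {
        char *method;
    };

    static const struct spdk_json_object_decoder req_decoders[] = {
        {"method", offsetof(struct req, method), spdk_json_decode_string},
    };

    int main(void)
    {
        char buf[] = "{\"method\": \"rpc_get_methods\"}";
        struct spdk_json_val vals[16];
        struct req req = {0};
        ssize_t n;

        n = spdk_json_parse(buf, strlen(buf), vals, 16, NULL,
                            SPDK_JSON_PARSE_FLAG_DECODE_IN_PLACE);
        if (n < 0) {
            return 1;
        }
        /* Fails, like the logged error, if "method" is missing or mistyped. */
        if (spdk_json_decode_object(vals, req_decoders, 1, &req) != 0) {
            fprintf(stderr, "spdk_json_decode_object failed\n");
            return 1;
        }
        printf("method=%s\n", req.method);
        free(req.method);
        return 0;
    }
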
00:07:24.533  
00:07:24.533  real	0m0.040s
00:07:24.533  user	0m0.024s
00:07:24.533  sys	0m0.016s
00:07:24.533   13:44:07 unittest.unittest_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.533   13:44:07 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:24.533  ************************************
00:07:24.533  END TEST unittest_rpc
00:07:24.533  ************************************
00:07:24.533   13:44:07 unittest -- unit/unittest.sh@229 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.533  ************************************
00:07:24.533  START TEST unittest_notify
00:07:24.533  ************************************
00:07:24.533   13:44:07 unittest.unittest_notify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:24.533  
00:07:24.533  
00:07:24.533       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.533       http://cunit.sourceforge.net/
00:07:24.533  
00:07:24.533  
00:07:24.533  Suite: app_suite
00:07:24.533    Test: notify ...passed
00:07:24.533  
00:07:24.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.533                suites      1      1    n/a      0        0
00:07:24.533                 tests      1      1      1      0        0
00:07:24.533               asserts     13     13     13      0      n/a
00:07:24.533  
00:07:24.533  Elapsed time =    0.000 seconds
00:07:24.533  
00:07:24.533  real	0m0.028s
00:07:24.533  user	0m0.012s
00:07:24.533  sys	0m0.016s
00:07:24.533   13:44:07 unittest.unittest_notify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.533  ************************************
00:07:24.533  END TEST unittest_notify
00:07:24.533  ************************************
00:07:24.533   13:44:07 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x
00:07:24.533   13:44:07 unittest -- unit/unittest.sh@230 -- # run_test unittest_nvme unittest_nvme
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.533   13:44:07 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.534   13:44:07 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.534  ************************************
00:07:24.534  START TEST unittest_nvme
00:07:24.534  ************************************
00:07:24.534   13:44:07 unittest.unittest_nvme -- common/autotest_common.sh@1129 -- # unittest_nvme
00:07:24.534   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut
00:07:24.534  
00:07:24.534  
00:07:24.534       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.534       http://cunit.sourceforge.net/
00:07:24.534  
00:07:24.534  
00:07:24.534  Suite: nvme
00:07:24.534    Test: test_opc_data_transfer ...passed
00:07:24.534    Test: test_spdk_nvme_transport_id_parse_trtype ...passed
00:07:24.534    Test: test_spdk_nvme_transport_id_parse_adrfam ...passed
00:07:24.534    Test: test_trid_parse_and_compare ...[2024-12-11 13:44:07.271061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1237:parse_next_key: *ERROR*: Key without ':' or '=' separator
00:07:24.534  [2024-12-11 13:44:07.271446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1294:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:24.534  [2024-12-11 13:44:07.271503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1249:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31
00:07:24.534  [2024-12-11 13:44:07.271563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1294:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:24.534  [2024-12-11 13:44:07.271611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1260:parse_next_key: *ERROR*: Key without value
00:07:24.534  [2024-12-11 13:44:07.271878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1294:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:24.534  passed
00:07:24.534    Test: test_trid_trtype_str ...passed
00:07:24.534    Test: test_trid_adrfam_str ...passed
00:07:24.534    Test: test_nvme_ctrlr_probe ...[2024-12-11 13:44:07.272319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 662:nvme_ctrlr_probe: *ERROR*: NVMe controller for SSD:  is being destructed
00:07:24.534  passed
00:07:24.534    Test: test_spdk_nvme_probe_ext ...[2024-12-11 13:44:07.272423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:24.534  [2024-12-11 13:44:07.272566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:24.534  [2024-12-11 13:44:07.272616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:07:24.534  [2024-12-11 13:44:07.272834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 833:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available
00:07:24.534  passed
00:07:24.534    Test: test_spdk_nvme_connect ...[2024-12-11 13:44:07.272916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:07:24.534  [2024-12-11 13:44:07.273097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1048:spdk_nvme_connect: *ERROR*: No transport ID specified
00:07:24.534  passed
00:07:24.534    Test: test_nvme_ctrlr_probe_internal ...[2024-12-11 13:44:07.273860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:24.534  [2024-12-11 13:44:07.274166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:24.534  passed
00:07:24.534    Test: test_nvme_init_controllers ...[2024-12-11 13:44:07.274239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:07:24.534  [2024-12-11 13:44:07.274390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 
00:07:24.534  passed
00:07:24.534    Test: test_nvme_driver_init ...[2024-12-11 13:44:07.274544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 576:nvme_driver_init: *ERROR*: primary process failed to reserve memory
00:07:24.534  [2024-12-11 13:44:07.274622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:24.792  [2024-12-11 13:44:07.383987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 594:nvme_driver_init: *ERROR*: timeout waiting for primary process to init
00:07:24.792  [2024-12-11 13:44:07.384210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 616:nvme_driver_init: *ERROR*: failed to initialize mutex
00:07:24.792  passed
00:07:24.792    Test: test_spdk_nvme_detach ...passed
00:07:24.792    Test: test_nvme_completion_poll_cb ...passed
00:07:24.792    Test: test_nvme_user_copy_cmd_complete ...passed
00:07:24.792    Test: test_nvme_allocate_request_null ...passed
00:07:24.792    Test: test_nvme_allocate_request ...passed
00:07:24.792    Test: test_nvme_free_request ...passed
00:07:24.792    Test: test_nvme_allocate_request_user_copy ...passed
00:07:24.792    Test: test_nvme_robust_mutex_init_shared ...passed
00:07:24.792    Test: test_nvme_request_check_timeout ...passed
00:07:24.792    Test: test_nvme_wait_for_completion ...passed
00:07:24.792    Test: test_spdk_nvme_parse_func ...passed
00:07:24.792    Test: test_spdk_nvme_detach_async ...passed
00:07:24.792    Test: test_nvme_parse_addr ...[2024-12-11 13:44:07.385816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1711:nvme_parse_addr: *ERROR*: getaddrinfo failed: Name or service not known (-2)
00:07:24.792  passed
00:07:24.792  
00:07:24.792  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.792                suites      1      1    n/a      0        0
00:07:24.792                 tests     25     25     25      0        0
00:07:24.792               asserts    332    332    332      0      n/a
00:07:24.792  
00:07:24.792  Elapsed time =    0.008 seconds
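
test_trid_parse_and_compare above deliberately feeds malformed transport ID
strings to the parser (keys without separators, over-long keys, keys without
values). A sketch of the same public spdk_nvme_transport_id_parse() API from
spdk/nvme.h, with one well-formed and one malformed input:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid;
        int rc;

        memset(&trid, 0, sizeof(trid));
        /* A well-formed "key:value key:value" string parses cleanly... */
        rc = spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:127.0.0.1 trsvcid:4420");
        printf("valid trid rc=%d trtype=%d\n", rc, trid.trtype);

        /* ...while a key with no ':' or '=' separator fails, matching the
         * "Key without ':' or '=' separator" error logged above. */
        rc = spdk_nvme_transport_id_parse(&trid, "trtypeTCP");
        printf("malformed trid rc=%d (expected negative)\n", rc);
        return 0;
    }
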
00:07:24.792   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:07:24.792  
00:07:24.792  
00:07:24.792       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.792       http://cunit.sourceforge.net/
00:07:24.792  
00:07:24.792  
00:07:24.792  Suite: nvme_ctrlr
00:07:24.792    Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-12-11 13:44:07.421837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-12-11 13:44:07.423554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-12-11 13:44:07.424800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-12-11 13:44:07.426005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-12-11 13:44:07.427245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  [2024-12-11 13:44:07.428375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  [2024-12-11 13:44:07.429522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  [2024-12-11 13:44:07.430663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-12-11 13:44:07.432981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  [2024-12-11 13:44:07.435210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  [2024-12-11 13:44:07.436387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-12-11 13:44:07.438832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  [2024-12-11 13:44:07.440114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  [2024-12-11 13:44:07.442532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:24.792  passed
00:07:24.792    Test: test_nvme_ctrlr_init_delay ...[2024-12-11 13:44:07.445326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  passed
00:07:24.792    Test: test_alloc_io_qpair_rr_1 ...[2024-12-11 13:44:07.446841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  [2024-12-11 13:44:07.447300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:07:24.792  [2024-12-11 13:44:07.447503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:24.792  [2024-12-11 13:44:07.447581] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:24.792  passed
00:07:24.792    Test: test_ctrlr_get_default_ctrlr_opts ...passed
00:07:24.792    Test: test_ctrlr_get_default_io_qpair_opts ...[2024-12-11 13:44:07.447693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:24.792  passed
00:07:24.792    Test: test_alloc_io_qpair_wrr_1 ...[2024-12-11 13:44:07.447928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.792  passed
00:07:24.792    Test: test_alloc_io_qpair_wrr_2 ...[2024-12-11 13:44:07.448307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.793  [2024-12-11 13:44:07.448575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:07:24.793  passed
00:07:24.793    Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-12-11 13:44:07.449082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5051:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_update_firmware invalid size!
00:07:24.793  [2024-12-11 13:44:07.449277] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:07:24.793  [2024-12-11 13:44:07.449441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5128:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] nvme_ctrlr_cmd_fw_commit failed!
00:07:24.793  [2024-12-11 13:44:07.449593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:07:24.793  passed
00:07:24.793    Test: test_nvme_ctrlr_fail ...[2024-12-11 13:44:07.449754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [, 0] in failed state.
00:07:24.793  passed
00:07:24.793    Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed
00:07:24.793    Test: test_nvme_ctrlr_set_supported_features ...passed
00:07:24.793    Test: test_nvme_ctrlr_set_host_feature ...[2024-12-11 13:44:07.449982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:24.793  passed
00:07:24.793    Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed
00:07:24.793    Test: test_nvme_ctrlr_test_active_ns ...[2024-12-11 13:44:07.451586] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_test_active_ns_error_case ...passed
00:07:25.051    Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed
00:07:25.051    Test: test_spdk_nvme_ctrlr_set_trid ...passed
00:07:25.051    Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-12-11 13:44:07.710391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_init_set_num_queues ...[2024-12-11 13:44:07.717224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-12-11 13:44:07.718398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  [2024-12-11 13:44:07.718457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3039:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [, 0] Keep alive timeout Get Feature failed: SC 6 SCT 0
00:07:25.051  passed
00:07:25.051    Test: test_alloc_io_qpair_fail ...[2024-12-11 13:44:07.719595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_add_remove_process ...[2024-12-11 13:44:07.719672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 505:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [, 0] nvme_transport_ctrlr_connect_io_qpair() failed
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_set_arbitration_feature ...passed
00:07:25.051    Test: test_nvme_ctrlr_set_state ...passed
00:07:25.051    Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-12-11 13:44:07.719866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1554:_nvme_ctrlr_set_state: *ERROR*: [, 0] Specified timeout would cause integer overflow. Defaulting to no timeout.
00:07:25.051  [2024-12-11 13:44:07.719953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-12-11 13:44:07.744718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_ns_mgmt ...[2024-12-11 13:44:07.786839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_reset ...[2024-12-11 13:44:07.788326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_aer_callback ...[2024-12-11 13:44:07.788653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_ns_attr_changed ...[2024-12-11 13:44:07.790033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed
00:07:25.051    Test: test_nvme_ctrlr_set_supported_log_pages ...passed
00:07:25.051    Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-12-11 13:44:07.791771] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_parse_ana_log_page ...passed
00:07:25.051    Test: test_nvme_ctrlr_ana_resize ...[2024-12-11 13:44:07.793137] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_ctrlr_get_memory_domains ...passed
00:07:25.051    Test: test_nvme_transport_ctrlr_ready ...passed
00:07:25.051  [2024-12-11 13:44:07.794621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4194:nvme_ctrlr_process_init: *ERROR*: [, 0] Transport controller ready step failed: rc -1
00:07:25.051  [2024-12-11 13:44:07.794689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4246:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr operation failed with error: -1, ctrlr state: 53 (error)
00:07:25.051  
00:07:25.051    Test: test_nvme_ctrlr_disable ...[2024-12-11 13:44:07.794734] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:25.051  passed
00:07:25.051    Test: test_nvme_numa_id ...passed
00:07:25.051  
00:07:25.051  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.051                suites      1      1    n/a      0        0
00:07:25.051                 tests     45     45     45      0        0
00:07:25.051               asserts  10448  10448  10448      0      n/a
00:07:25.051  
00:07:25.051  Elapsed time =    0.333 seconds
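
Nearly every test in this suite logs "admin_queue_size 0 is less than minimum
defined by NVMe spec, use min value" because the fixture constructs each
controller with a zero admin queue size and relies on the clamp. A sketch of
that clamp; the constant name is illustrative (the NVMe spec minimum queue
depth is 2 entries):

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_ADMIN_QUEUE_MIN_ENTRIES 2u  /* spec minimum for the admin queue */

    static uint32_t
    clamp_admin_queue_size(uint32_t requested)
    {
        if (requested < NVME_ADMIN_QUEUE_MIN_ENTRIES) {
            fprintf(stderr, "admin_queue_size %u is less than minimum defined by "
                    "NVMe spec, use min value\n", requested);
            return NVME_ADMIN_QUEUE_MIN_ENTRIES;
        }
        return requested;
    }

    int main(void)
    {
        return clamp_admin_queue_size(0) == NVME_ADMIN_QUEUE_MIN_ENTRIES ? 0 : 1;
    }
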
00:07:25.310   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut
00:07:25.310  
00:07:25.310  
00:07:25.310       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.310       http://cunit.sourceforge.net/
00:07:25.310  
00:07:25.310  
00:07:25.310  Suite: nvme_ctrlr_cmd
00:07:25.310    Test: test_get_log_pages ...passed
00:07:25.310    Test: test_set_feature_cmd ...passed
00:07:25.310    Test: test_set_feature_ns_cmd ...passed
00:07:25.310    Test: test_get_feature_cmd ...passed
00:07:25.310    Test: test_get_feature_ns_cmd ...passed
00:07:25.310    Test: test_abort_cmd ...passed
00:07:25.310    Test: test_set_host_id_cmds ...[2024-12-11 13:44:07.844750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024
00:07:25.310  passed
00:07:25.310    Test: test_io_cmd_raw_no_payload_build ...passed
00:07:25.310    Test: test_io_raw_cmd ...passed
00:07:25.310    Test: test_io_raw_cmd_with_md ...passed
00:07:25.310    Test: test_namespace_attach ...passed
00:07:25.310    Test: test_namespace_detach ...passed
00:07:25.310    Test: test_namespace_create ...passed
00:07:25.310    Test: test_namespace_delete ...passed
00:07:25.310    Test: test_doorbell_buffer_config ...passed
00:07:25.310    Test: test_format_nvme ...passed
00:07:25.310    Test: test_fw_commit ...passed
00:07:25.310    Test: test_fw_image_download ...passed
00:07:25.310    Test: test_sanitize ...passed
00:07:25.310    Test: test_directive ...passed
00:07:25.310    Test: test_nvme_request_add_abort ...passed
00:07:25.310    Test: test_spdk_nvme_ctrlr_cmd_abort ...passed
00:07:25.310    Test: test_nvme_ctrlr_cmd_identify ...passed
00:07:25.310    Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed
00:07:25.310  
00:07:25.310  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.310                suites      1      1    n/a      0        0
00:07:25.310                 tests     24     24     24      0        0
00:07:25.310               asserts    198    198    198      0      n/a
00:07:25.310  
00:07:25.310  Elapsed time =    0.001 seconds
00:07:25.310   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut
00:07:25.310  
00:07:25.310  
00:07:25.310       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.311       http://cunit.sourceforge.net/
00:07:25.311  
00:07:25.311  
00:07:25.311  Suite: nvme_ctrlr_cmd
00:07:25.311    Test: test_geometry_cmd ...passed
00:07:25.311    Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed
00:07:25.311  
00:07:25.311  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.311                suites      1      1    n/a      0        0
00:07:25.311                 tests      2      2      2      0        0
00:07:25.311               asserts      7      7      7      0      n/a
00:07:25.311  
00:07:25.311  Elapsed time =    0.000 seconds
00:07:25.311   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut
00:07:25.311  
00:07:25.311  
00:07:25.311       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.311       http://cunit.sourceforge.net/
00:07:25.311  
00:07:25.311  
00:07:25.311  Suite: nvme
00:07:25.311    Test: test_nvme_ns_construct ...passed
00:07:25.311    Test: test_nvme_ns_uuid ...passed
00:07:25.311    Test: test_nvme_ns_csi ...passed
00:07:25.311    Test: test_nvme_ns_data ...passed
00:07:25.311    Test: test_nvme_ns_set_identify_data ...passed
00:07:25.311    Test: test_spdk_nvme_ns_get_values ...passed
00:07:25.311    Test: test_spdk_nvme_ns_is_active ...passed
00:07:25.311    Test: spdk_nvme_ns_supports ...passed
00:07:25.311    Test: test_nvme_ns_has_supported_iocs_specific_data ...passed
00:07:25.311    Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed
00:07:25.311    Test: test_nvme_ctrlr_identify_id_desc ...passed
00:07:25.311    Test: test_nvme_ns_find_id_desc ...passed
00:07:25.311  
00:07:25.311  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.311                suites      1      1    n/a      0        0
00:07:25.311                 tests     12     12     12      0        0
00:07:25.311               asserts     95     95     95      0      n/a
00:07:25.311  
00:07:25.311  Elapsed time =    0.001 seconds
00:07:25.311   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut
00:07:25.311  
00:07:25.311  
00:07:25.311       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.311       http://cunit.sourceforge.net/
00:07:25.311  
00:07:25.311  
00:07:25.311  Suite: nvme_ns_cmd
00:07:25.311    Test: split_test ...passed
00:07:25.311    Test: split_test2 ...passed
00:07:25.311    Test: split_test3 ...passed
00:07:25.311    Test: split_test4 ...passed
00:07:25.311    Test: test_nvme_ns_cmd_flush ...passed
00:07:25.311    Test: test_nvme_ns_cmd_dataset_management ...passed
00:07:25.311    Test: test_nvme_ns_cmd_copy ...passed
00:07:25.311    Test: test_io_flags ...[2024-12-11 13:44:07.935704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc
00:07:25.311  passed
00:07:25.311    Test: test_nvme_ns_cmd_write_zeroes ...passed
00:07:25.311    Test: test_nvme_ns_cmd_write_uncorrectable ...passed
00:07:25.311    Test: test_nvme_ns_cmd_reservation_register ...passed
00:07:25.311    Test: test_nvme_ns_cmd_reservation_release ...passed
00:07:25.311    Test: test_nvme_ns_cmd_reservation_acquire ...passed
00:07:25.311    Test: test_nvme_ns_cmd_reservation_report ...passed
00:07:25.311    Test: test_cmd_child_request ...passed
00:07:25.311    Test: test_nvme_ns_cmd_readv ...passed
00:07:25.311    Test: test_nvme_ns_cmd_readv_sgl ...[2024-12-11 13:44:07.937344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 390:_nvme_ns_cmd_split_request_sgl: *ERROR*: Unable to send I/O. Would require more than the supported number of SGL Elements.
00:07:25.311  passed
00:07:25.311    Test: test_nvme_ns_cmd_read_with_md ...passed
00:07:25.311    Test: test_nvme_ns_cmd_writev ...[2024-12-11 13:44:07.937836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512
00:07:25.311  passed
00:07:25.311    Test: test_nvme_ns_cmd_write_with_md ...passed
00:07:25.311    Test: test_nvme_ns_cmd_zone_append_with_md ...passed
00:07:25.311    Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed
00:07:25.311    Test: test_nvme_ns_cmd_comparev ...passed
00:07:25.311    Test: test_nvme_ns_cmd_compare_and_write ...passed
00:07:25.311    Test: test_nvme_ns_cmd_compare_with_md ...passed
00:07:25.311    Test: test_nvme_ns_cmd_comparev_with_md ...passed
00:07:25.311    Test: test_nvme_ns_cmd_setup_request ...passed
00:07:25.311    Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed
00:07:25.311    Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-12-11 13:44:07.940890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:25.311  passed
00:07:25.311    Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-12-11 13:44:07.941094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:25.311  passed
00:07:25.311    Test: test_nvme_ns_cmd_verify ...passed
00:07:25.311    Test: test_nvme_ns_cmd_io_mgmt_send ...passed
00:07:25.311    Test: test_nvme_ns_cmd_io_mgmt_recv ...passed
00:07:25.311  
00:07:25.311  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.311                suites      1      1    n/a      0        0
00:07:25.311                 tests     33     33     33      0        0
00:07:25.311               asserts    569    569    569      0      n/a
00:07:25.311  
00:07:25.311  Elapsed time =    0.008 seconds
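
test_io_flags, test_spdk_nvme_ns_cmd_writev_ext, and
test_spdk_nvme_ns_cmd_readv_ext all provoke _is_io_flags_valid() rejections.
A sketch of that mask check; the mask value here is illustrative, not SPDK's
actual SPDK_NVME_IO_FLAGS_* set:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VALID_IO_FLAGS_MASK 0xffff0000u  /* hypothetical: upper 16 bits valid */

    static bool
    is_io_flags_valid(uint32_t io_flags)
    {
        if (io_flags & ~VALID_IO_FLAGS_MASK) {
            fprintf(stderr, "Invalid io_flags 0x%x\n", io_flags);
            return false;
        }
        return true;
    }

    int main(void)
    {
        /* 0xfffc sets low-order bits outside the mask, so it is rejected,
         * matching the error the unit test provokes above. */
        return is_io_flags_valid(0xfffc) ? 1 : 0;
    }
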
00:07:25.311   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut
00:07:25.311  
00:07:25.311  
00:07:25.311       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.311       http://cunit.sourceforge.net/
00:07:25.311  
00:07:25.311  
00:07:25.311  Suite: nvme_ns_cmd
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:07:25.311    Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:07:25.311  
00:07:25.311  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.311                suites      1      1    n/a      0        0
00:07:25.311                 tests     12     12     12      0        0
00:07:25.311               asserts    123    123    123      0      n/a
00:07:25.311  
00:07:25.311  Elapsed time =    0.002 seconds
00:07:25.311   13:44:07 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:07:25.311  
00:07:25.311  
00:07:25.311       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.311       http://cunit.sourceforge.net/
00:07:25.311  
00:07:25.311  
00:07:25.311  Suite: nvme_qpair
00:07:25.311    Test: test3 ...passed
00:07:25.311    Test: test_ctrlr_failed ...passed
00:07:25.311    Test: struct_packing ...passed
00:07:25.311    Test: test_nvme_qpair_process_completions ...[2024-12-11 13:44:08.013222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:25.311  [2024-12-11 13:44:08.013503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:25.311  [2024-12-11 13:44:08.013588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 0
00:07:25.311  [2024-12-11 13:44:08.013639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 1
00:07:25.311  passed
00:07:25.311    Test: test_nvme_completion_is_retry ...passed
00:07:25.311    Test: test_get_status_string ...passed
00:07:25.311    Test: test_nvme_qpair_add_cmd_error_injection ...passed
00:07:25.311    Test: test_nvme_qpair_submit_request ...passed
00:07:25.311    Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:07:25.311    Test: test_nvme_qpair_manual_complete_request ...passed
00:07:25.311    Test: test_nvme_qpair_init_deinit ...passed
00:07:25.311    Test: test_nvme_get_sgl_print_info ...passed
00:07:25.311  [2024-12-11 13:44:08.014100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:25.311  
00:07:25.311  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.311                suites      1      1    n/a      0        0
00:07:25.311                 tests     12     12     12      0        0
00:07:25.311               asserts    154    154    154      0      n/a
00:07:25.311  
00:07:25.311  Elapsed time =    0.001 seconds
00:07:25.311   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:07:25.311  
00:07:25.311  
00:07:25.311       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.311       http://cunit.sourceforge.net/
00:07:25.311  
00:07:25.311  
00:07:25.311  Suite: nvme_pcie
00:07:25.311    Test: test_prp_list_append ...[2024-12-11 13:44:08.054181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:25.311  [2024-12-11 13:44:08.054516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1271:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:07:25.311  [2024-12-11 13:44:08.054585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1261:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:07:25.311  [2024-12-11 13:44:08.054909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:25.311  passed
00:07:25.311    Test: test_nvme_pcie_hotplug_monitor ...[2024-12-11 13:44:08.055071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:25.311  passed
00:07:25.311    Test: test_shadow_doorbell_update ...passed
00:07:25.311    Test: test_build_contig_hw_sgl_request ...passed
00:07:25.311    Test: test_nvme_pcie_qpair_build_metadata ...passed
00:07:25.311    Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:07:25.311    Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:07:25.311    Test: test_nvme_pcie_qpair_build_contig_request ...passed
00:07:25.311    Test: test_nvme_pcie_ctrlr_regs_get_set ...[2024-12-11 13:44:08.055565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:25.311  passed
00:07:25.311    Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed
00:07:25.311    Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-12-11 13:44:08.055856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:07:25.311  passed
00:07:25.311    Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed
00:07:25.311    Test: test_nvme_pcie_ctrlr_config_pmr ...passed
00:07:25.311    Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-12-11 13:44:08.055982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:07:25.312  [2024-12-11 13:44:08.056084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:07:25.312  [2024-12-11 13:44:08.056179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:07:25.312  passed
00:07:25.312  
00:07:25.312  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.312                suites      1      1    n/a      0        0
00:07:25.312                 tests     14     14     14      0        0
00:07:25.312               asserts    235    235    235      0      n/a
00:07:25.312  
00:07:25.312  Elapsed time =    0.002 seconds
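
test_prp_list_append walks the PRP construction error paths: a non-dword-aligned
virtual address, a PRP2 entry off a page boundary, and PRP list exhaustion. A
sketch of the first two alignment guards; the constants and helper are
illustrative stand-ins for nvme_pcie_prp_list_append():

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    static int
    prp_check(uint64_t virt_addr, uint64_t prp2)
    {
        if (virt_addr & 3) {            /* PRP1 must be dword aligned */
            fprintf(stderr, "virt_addr 0x%jx not dword aligned\n",
                    (uintmax_t)virt_addr);
            return -1;
        }
        if (prp2 & (PAGE_SIZE - 1)) {   /* PRP2 must sit on a page boundary */
            fprintf(stderr, "PRP 2 not page aligned (0x%jx)\n",
                    (uintmax_t)prp2);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* Mirrors the two failing inputs from the log above. */
        prp_check(0x100001, 0x0);       /* dword misalignment */
        prp_check(0x100000, 0x900800);  /* page misalignment */
        return 0;
    }
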
00:07:25.312   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:07:25.570  
00:07:25.570  
00:07:25.570       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.570       http://cunit.sourceforge.net/
00:07:25.570  
00:07:25.570  
00:07:25.570  Suite: nvme_ns_cmd
00:07:25.570    Test: nvme_poll_group_create_test ...passed
00:07:25.570    Test: nvme_poll_group_add_remove_test ...[2024-12-11 13:44:08.090901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_poll_group.c: 216:spdk_nvme_poll_group_add: *ERROR*: Queue pair without interrupts cannot be added to poll group
00:07:25.570  passed
00:07:25.570    Test: nvme_poll_group_process_completions ...passed
00:07:25.570    Test: nvme_poll_group_destroy_test ...passed
00:07:25.570    Test: nvme_poll_group_get_free_stats ...passed
00:07:25.570  
00:07:25.570  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.570                suites      1      1    n/a      0        0
00:07:25.570                 tests      5      5      5      0        0
00:07:25.570               asserts    103    103    103      0      n/a
00:07:25.570  
00:07:25.570  Elapsed time =    0.001 seconds
00:07:25.570   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:07:25.570  
00:07:25.570  
00:07:25.570       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.570       http://cunit.sourceforge.net/
00:07:25.570  
00:07:25.570  
00:07:25.570  Suite: nvme_quirks
00:07:25.570    Test: test_nvme_quirks_striping ...passed
00:07:25.570  
00:07:25.570  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.570                suites      1      1    n/a      0        0
00:07:25.570                 tests      1      1      1      0        0
00:07:25.570               asserts      5      5      5      0      n/a
00:07:25.570  
00:07:25.570  Elapsed time =    0.000 seconds
00:07:25.571   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:07:25.571  
00:07:25.571  
00:07:25.571       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.571       http://cunit.sourceforge.net/
00:07:25.571  
00:07:25.571  
00:07:25.571  Suite: nvme_tcp
00:07:25.571    Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:07:25.571    Test: test_nvme_tcp_build_iovs ...passed
00:07:25.571    Test: test_nvme_tcp_build_sgl_request ...[2024-12-11 13:44:08.161658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7dbe23a0d2d0, and the iovcnt=16, remaining_size=28672
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed
00:07:25.571    Test: test_nvme_tcp_build_iovs_with_md ...passed
00:07:25.571    Test: test_nvme_tcp_req_complete_safe ...passed
00:07:25.571    Test: test_nvme_tcp_req_get ...passed
00:07:25.571    Test: test_nvme_tcp_req_init ...passed
00:07:25.571    Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:07:25.571    Test: test_nvme_tcp_qpair_write_pdu ...passed
00:07:25.571    Test: test_nvme_tcp_qpair_set_recv_state ...passed
00:07:25.571    Test: test_nvme_tcp_alloc_reqs ...[2024-12-11 13:44:08.162293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe23609020 is same with the state(7) to be set
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-12-11 13:44:08.162705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe23909080 is same with the state(6) to be set
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_pdu_ch_handle ...[2024-12-11 13:44:08.162767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1133:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7dbe2380a760
00:07:25.571  [2024-12-11 13:44:08.162799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1192:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:07:25.571  [2024-12-11 13:44:08.162831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.162861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1143:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:07:25.571  [2024-12-11 13:44:08.162892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.162914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:07:25.571  [2024-12-11 13:44:08.162943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.162991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.163021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.163054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_qpair_connect_sock ...[2024-12-11 13:44:08.163081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.163110] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2380a080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.163305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:07:25.571  [2024-12-11 13:44:08.163362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_qpair_icreq_send ...passed
00:07:25.571    Test: test_nvme_tcp_c2h_payload_handle ...[2024-12-11 13:44:08.163714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_icresp_handle ...[2024-12-11 13:44:08.163829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7dbe2380b5c0): PDU Sequence Error
00:07:25.571  [2024-12-11 13:44:08.163879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1476:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:07:25.571  [2024-12-11 13:44:08.163911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1483:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:07:25.571  [2024-12-11 13:44:08.163935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2390b080 is same with the state(6) to be set
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_pdu_payload_handle ...[2024-12-11 13:44:08.163963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1492:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:07:25.571  [2024-12-11 13:44:08.163994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2390b080 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.164023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe2390b080 is same with the state(0) to be set
00:07:25.571  [2024-12-11 13:44:08.164084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7dbe2380c5c0): PDU Sequence Error
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed
00:07:25.571    Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-12-11 13:44:08.164162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1553:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7dbe2390d210
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-12-11 13:44:08.164327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 357:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7dbe23a294d0, errno=0, rc=0
00:07:25.571  [2024-12-11 13:44:08.164360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe23a294d0 is same with the state(6) to be set
00:07:25.571  [2024-12-11 13:44:08.164394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dbe23a294d0 is same with the state(6) to be set
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-12-11 13:44:08.164432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dbe23a294d0 (0): Success
00:07:25.571  [2024-12-11 13:44:08.164467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dbe23a294d0 (0): Success
00:07:25.571  [2024-12-11 13:44:08.287205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:25.571  [2024-12-11 13:44:08.287343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed
00:07:25.571    Test: test_nvme_tcp_poll_group_get_stats ...[2024-12-11 13:44:08.288051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_ctrlr_construct ...[2024-12-11 13:44:08.288132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:25.571  [2024-12-11 13:44:08.288526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:25.571  [2024-12-11 13:44:08.288583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:25.571  [2024-12-11 13:44:08.288733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:07:25.571  [2024-12-11 13:44:08.288835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:25.571  [2024-12-11 13:44:08.289056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515000001980 with addr=192.168.1.78, port=23
00:07:25.571  passed
00:07:25.571    Test: test_nvme_tcp_qpair_submit_request ...[2024-12-11 13:44:08.289196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:25.571  [2024-12-11 13:44:08.289472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x514000000c40, and the iovcnt=1, remaining_size=1024
00:07:25.571  [2024-12-11 13:44:08.289549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 977:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:07:25.571  passed
00:07:25.571  
00:07:25.571  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.571                suites      1      1    n/a      0        0
00:07:25.571                 tests     27     27     27      0        0
00:07:25.571               asserts    624    624    624      0      n/a
00:07:25.571  
00:07:25.571  Elapsed time =    0.128 seconds
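The ICResp failures logged above encode the NVMe/TCP connection-handshake rules this suite exercises: the PDU format version (PFV) must be 0, maxh2cdata must be at least 4096 bytes, and the controller PDU data alignment (CPDA) must not exceed 31. The following is a minimal standalone sketch of those same three checks, using a hypothetical ic_resp struct rather than SPDK's own PDU types, and fed the exact rejected values from the log:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the ICResp PDU fields; not SPDK's struct. */
    struct ic_resp {
        uint16_t pfv;        /* PDU format version; only 0 is defined */
        uint32_t maxh2cdata; /* max host-to-controller data per PDU */
        uint8_t  cpda;       /* controller PDU data alignment */
    };

    static int
    validate_ic_resp(const struct ic_resp *resp)
    {
        if (resp->pfv != 0) {
            fprintf(stderr, "Expected ICResp PFV 0, got %u\n", (unsigned)resp->pfv);
            return -1;
        }
        if (resp->maxh2cdata < 4096) {
            fprintf(stderr, "Expected ICResp maxh2cdata >=4096, got %u\n",
                    (unsigned)resp->maxh2cdata);
            return -1;
        }
        if (resp->cpda > 31) {
            fprintf(stderr, "Expected ICResp cpda <=31, got %u\n", (unsigned)resp->cpda);
            return -1;
        }
        return 0;
    }

    int
    main(void)
    {
        /* The same three rejected values the unit-test log reports. */
        struct ic_resp bad[] = {
            { .pfv = 1, .maxh2cdata = 8192, .cpda = 0 },
            { .pfv = 0, .maxh2cdata = 2048, .cpda = 0 },
            { .pfv = 0, .maxh2cdata = 8192, .cpda = 64 },
        };

        for (size_t i = 0; i < sizeof(bad) / sizeof(bad[0]); i++)
            validate_ic_resp(&bad[i]);
        return 0;
    }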
00:07:25.571   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:07:25.571  
00:07:25.571  
00:07:25.571       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.571       http://cunit.sourceforge.net/
00:07:25.571  
00:07:25.571  
00:07:25.571  Suite: nvme_transport
00:07:25.571    Test: test_nvme_get_transport ...passed
00:07:25.571    Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:07:25.571    Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:07:25.571    Test: test_nvme_transport_poll_group_add_remove ...passed
00:07:25.571    Test: test_ctrlr_get_memory_domains ...passed
00:07:25.571  
00:07:25.571  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.571                suites      1      1    n/a      0        0
00:07:25.571                 tests      5      5      5      0        0
00:07:25.571               asserts     28     28     28      0      n/a
00:07:25.571  
00:07:25.571  Elapsed time =    0.000 seconds
00:07:25.830   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:07:25.830  
00:07:25.830  
00:07:25.830       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.830       http://cunit.sourceforge.net/
00:07:25.830  
00:07:25.830  
00:07:25.830  Suite: nvme_io_msg
00:07:25.830    Test: test_nvme_io_msg_send ...passed
00:07:25.830    Test: test_nvme_io_msg_process ...passed
00:07:25.830    Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:07:25.830  
00:07:25.830  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.830                suites      1      1    n/a      0        0
00:07:25.830                 tests      3      3      3      0        0
00:07:25.830               asserts     56     56     56      0      n/a
00:07:25.830  
00:07:25.830  Elapsed time =    0.000 seconds
00:07:25.830   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:07:25.830  
00:07:25.830  
00:07:25.830       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.830       http://cunit.sourceforge.net/
00:07:25.830  
00:07:25.830  
00:07:25.830  Suite: nvme_pcie_common
00:07:25.830    Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-12-11 13:44:08.418792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 112:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:07:25.830  passed
00:07:25.830    Test: test_nvme_pcie_qpair_construct_destroy ...passed
00:07:25.830    Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:07:25.830    Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-12-11 13:44:08.420180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 541:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:07:25.830  [2024-12-11 13:44:08.420297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 494:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:07:25.830  passed
00:07:25.830    Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-12-11 13:44:08.420356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 588:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:07:25.830  passed
00:07:25.830    Test: test_nvme_pcie_poll_group_get_stats ...[2024-12-11 13:44:08.421198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:25.830  [2024-12-11 13:44:08.421291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:25.830  passed
00:07:25.830  
00:07:25.830  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.830                suites      1      1    n/a      0        0
00:07:25.830                 tests      6      6      6      0        0
00:07:25.830               asserts    148    148    148      0      n/a
00:07:25.830  
00:07:25.830  Elapsed time =    0.003 seconds
00:07:25.830   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:07:25.830  
00:07:25.830  
00:07:25.830       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.830       http://cunit.sourceforge.net/
00:07:25.830  
00:07:25.830  
00:07:25.830  Suite: nvme_fabric
00:07:25.830    Test: test_nvme_fabric_prop_set_cmd ...passed
00:07:25.830    Test: test_nvme_fabric_prop_get_cmd ...passed
00:07:25.830    Test: test_nvme_fabric_get_discovery_log_page ...passed
00:07:25.830    Test: test_nvme_fabric_discover_probe ...passed
00:07:25.830    Test: test_nvme_fabric_qpair_connect ...[2024-12-11 13:44:08.460957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:07:25.830  passed
00:07:25.830  
00:07:25.830  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.830                suites      1      1    n/a      0        0
00:07:25.830                 tests      5      5      5      0        0
00:07:25.830               asserts     60     60     60      0      n/a
00:07:25.830  
00:07:25.830  Elapsed time =    0.001 seconds
00:07:25.830   13:44:08 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:07:25.830  
00:07:25.830  
00:07:25.830       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.830       http://cunit.sourceforge.net/
00:07:25.830  
00:07:25.830  
00:07:25.830  Suite: nvme_opal
00:07:25.830    Test: test_opal_nvme_security_recv_send_done ...passed
00:07:25.830    Test: test_opal_add_short_atom_header ...[2024-12-11 13:44:08.493770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:07:25.830  passed
00:07:25.830  
00:07:25.830  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.830                suites      1      1    n/a      0        0
00:07:25.830                 tests      2      2      2      0        0
00:07:25.830               asserts     22     22     22      0      n/a
00:07:25.830  
00:07:25.830  Elapsed time =    0.000 seconds
00:07:25.830  
00:07:25.830  real	0m1.269s
00:07:25.830  user	0m0.608s
00:07:25.830  sys	0m0.519s
00:07:25.830   13:44:08 unittest.unittest_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.830   13:44:08 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:25.830  ************************************
00:07:25.830  END TEST unittest_nvme
00:07:25.830  ************************************
00:07:25.830   13:44:08 unittest -- unit/unittest.sh@231 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:25.830   13:44:08 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.830   13:44:08 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.830   13:44:08 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:25.830  ************************************
00:07:25.830  START TEST unittest_log
00:07:25.830  ************************************
00:07:25.830   13:44:08 unittest.unittest_log -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:25.830  
00:07:25.830  
00:07:25.830       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.830       http://cunit.sourceforge.net/
00:07:25.830  
00:07:25.830  
00:07:25.830  Suite: log
00:07:25.830    Test: log_test ...[2024-12-11 13:44:08.572194] log_ut.c:  56:log_test: *WARNING*: log warning unit test
00:07:25.830  [2024-12-11 13:44:08.572445] log_ut.c:  57:log_test: *DEBUG*: log test
00:07:25.830  log dump test:
00:07:25.830  00000000  6c 6f 67 20 64 75 6d 70                            log dump
00:07:25.830  spdk dump test:
00:07:25.830  00000000  73 70 64 6b 20 64 75 6d  70                        spdk dump
00:07:25.830  passed
00:07:25.830    Test: deprecation ...spdk dump test:
00:07:25.830  00000000  73 70 64 6b 20 64 75 6d  70 20 31 36 20 6d 6f 72  spdk dump 16 mor
00:07:25.830  00000010  65 20 63 68 61 72 73                              e chars
00:07:27.208  passed
00:07:27.208    Test: log_ext_test ...passed
00:07:27.208  
00:07:27.208  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.208                suites      1      1    n/a      0        0
00:07:27.208                 tests      3      3      3      0        0
00:07:27.208               asserts     77     77     77      0      n/a
00:07:27.208  
00:07:27.208  Elapsed time =    0.001 seconds
00:07:27.208  
00:07:27.208  real	0m1.031s
00:07:27.208  user	0m0.013s
00:07:27.208  sys	0m0.018s
00:07:27.208   13:44:09 unittest.unittest_log -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.208   13:44:09 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x
00:07:27.208  ************************************
00:07:27.208  END TEST unittest_log
00:07:27.208  ************************************
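The dump lines in the log_ut output above (an 8-digit hex offset, hex bytes split into two groups of eight, then the printable ASCII) are the classic 16-bytes-per-line layout. A small illustrative routine producing the same shape of output follows; it is an assumption-level sketch, not necessarily matching SPDK's own dump implementation:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Print buf as "<offset>  <8 hex bytes> <8 hex bytes>  <ascii>" lines,
     * mirroring the layout in the log_ut output above. */
    static void
    hex_dump(const void *buf, size_t len)
    {
        const unsigned char *p = buf;

        for (size_t off = 0; off < len; off += 16) {
            printf("%08zx ", off);
            for (size_t i = 0; i < 16; i++) {
                if (i == 8)
                    putchar(' ');           /* mid-line gap after 8 bytes */
                if (off + i < len)
                    printf(" %02x", p[off + i]);
                else
                    printf("   ");          /* pad a short final line */
            }
            printf("  ");
            for (size_t i = 0; i < 16 && off + i < len; i++)
                putchar(isprint(p[off + i]) ? p[off + i] : '.');
            putchar('\n');
        }
    }

    int
    main(void)
    {
        const char *msg = "spdk dump 16 more chars";

        hex_dump(msg, strlen(msg));  /* two lines: 16 bytes, then 7 */
        return 0;
    }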
00:07:27.208   13:44:09 unittest -- unit/unittest.sh@232 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:27.208   13:44:09 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.208   13:44:09 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.208   13:44:09 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.208  ************************************
00:07:27.208  START TEST unittest_lvol
00:07:27.208  ************************************
00:07:27.208   13:44:09 unittest.unittest_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:27.208  
00:07:27.208  
00:07:27.208       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.208       http://cunit.sourceforge.net/
00:07:27.208  
00:07:27.208  
00:07:27.208  Suite: lvol
00:07:27.208    Test: lvs_init_unload_success ...[2024-12-11 13:44:09.656293] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 889:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:07:27.208  passed
00:07:27.208    Test: lvs_init_destroy_success ...passed
00:07:27.208    Test: lvs_init_opts_success ...passed
00:07:27.208    Test: lvs_unload_lvs_is_null_fail ...passed
00:07:27.208    Test: lvs_names ...[2024-12-11 13:44:09.656860] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 959:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:07:27.208  [2024-12-11 13:44:09.657092] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 879:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:07:27.208  [2024-12-11 13:44:09.657149] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 723:spdk_lvs_init: *ERROR*: Name must be between 1 and 63 characters
00:07:27.208  [2024-12-11 13:44:09.657215] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 723:spdk_lvs_init: *ERROR*: Name must be between 1 and 63 characters
00:07:27.208  [2024-12-11 13:44:09.657396] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 733:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:07:27.208  passed
00:07:27.208    Test: lvol_create_destroy_success ...passed
00:07:27.208    Test: lvol_create_fail ...[2024-12-11 13:44:09.657998] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 691:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:07:27.208  [2024-12-11 13:44:09.658096] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1187:spdk_lvol_create: *ERROR*: lvol store does not exist
00:07:27.208  passed
00:07:27.208    Test: lvol_destroy_fail ...[2024-12-11 13:44:09.658401] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1023:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:07:27.208  passed
00:07:27.208    Test: lvol_close ...passed
00:07:27.208    Test: lvol_resize ...[2024-12-11 13:44:09.658603] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1611:spdk_lvol_close: *ERROR*: lvol does not exist
00:07:27.208  [2024-12-11 13:44:09.658654] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 992:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:07:27.208  passed
00:07:27.208    Test: lvol_set_read_only ...passed
00:07:27.209    Test: test_lvs_load ...[2024-12-11 13:44:09.659464] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value
00:07:27.209  [2024-12-11 13:44:09.659507] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:07:27.209  passed
00:07:27.209    Test: lvols_load ...[2024-12-11 13:44:09.659759] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:27.209  passed
00:07:27.209    Test: lvol_open ...[2024-12-11 13:44:09.659886] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:27.209  passed
00:07:27.209    Test: lvol_snapshot ...passed
00:07:27.209    Test: lvol_snapshot_fail ...[2024-12-11 13:44:09.660624] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:07:27.209  passed
00:07:27.209    Test: lvol_clone ...passed
00:07:27.209    Test: lvol_clone_fail ...[2024-12-11 13:44:09.661183] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:07:27.209  passed
00:07:27.209    Test: lvol_iter_clones ...passed
00:07:27.209    Test: lvol_refcnt ...[2024-12-11 13:44:09.661602] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1569:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 2646a2b0-1be1-4ad2-b3fc-dba307954ed0 because it is still open
00:07:27.209  passed
00:07:27.209    Test: lvol_names ...[2024-12-11 13:44:09.661759] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1153:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:27.209  [2024-12-11 13:44:09.661829] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:27.209  passed
00:07:27.209    Test: lvol_create_thin_provisioned ...[2024-12-11 13:44:09.662008] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1166:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created
00:07:27.209  passed
00:07:27.209    Test: lvol_rename ...[2024-12-11 13:44:09.662375] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:27.209  [2024-12-11 13:44:09.662456] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1521:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:07:27.209  passed
00:07:27.209    Test: lvs_rename ...[2024-12-11 13:44:09.662666] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 766:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:07:27.209  passed
00:07:27.209    Test: lvol_inflate ...passed
00:07:27.209    Test: lvol_decouple_parent ...[2024-12-11 13:44:09.662824] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1655:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:27.209  passed
00:07:27.209    Test: lvol_get_xattr ...passed
00:07:27.209    Test: lvol_esnap_reload ...[2024-12-11 13:44:09.663013] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1655:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:27.209  passed
00:07:27.209    Test: lvol_esnap_create_bad_args ...[2024-12-11 13:44:09.663408] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1242:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:07:27.209  passed
00:07:27.209    Test: lvol_esnap_create_delete ...[2024-12-11 13:44:09.663464] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1153:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:27.209  [2024-12-11 13:44:09.663508] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1255:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:07:27.209  [2024-12-11 13:44:09.663579] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:27.209  [2024-12-11 13:44:09.663715] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:07:27.209  passed
00:07:27.209    Test: lvol_esnap_load_esnaps ...passed
00:07:27.209    Test: lvol_esnap_missing ...[2024-12-11 13:44:09.663950] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1829:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:07:27.209  [2024-12-11 13:44:09.664063] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:27.209  [2024-12-11 13:44:09.664106] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:27.209  passed
00:07:27.209    Test: lvol_esnap_hotplug ...
00:07:27.209  	lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:07:27.209  	lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:07:27.209  [2024-12-11 13:44:09.664875] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2059:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7e693a39-4c25-4d8e-a321-aa89fd0e9381: failed to create esnap bs_dev: error -12
00:07:27.209  	lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM
00:07:27.209  	lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:07:27.209  	lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:07:27.209  	lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:07:27.209  [2024-12-11 13:44:09.665088] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2059:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9512aee9-a7a8-43d9-9055-775dab2b13e3: failed to create esnap bs_dev: error -12
00:07:27.209  [2024-12-11 13:44:09.665217] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2059:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6df57daf-1bd2-44d3-970c-2c6c4aa4341e: failed to create esnap bs_dev: error -12
00:07:27.209  	lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:07:27.209  	lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:07:27.209  	lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:07:27.209  	lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:07:27.209  	lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:07:27.209  	lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:07:27.209  	lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:07:27.209  passed
00:07:27.209    Test: lvol_get_by ...passed
00:07:27.209    Test: lvol_shallow_copy ...[2024-12-11 13:44:09.666469] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2271:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:07:27.209  [2024-12-11 13:44:09.666509] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2278:spdk_lvol_shallow_copy: *ERROR*: lvol bf1263c1-a064-476d-bf34-331c84055821 shallow copy, ext_dev must not be NULL
00:07:27.209  passed
00:07:27.209    Test: lvol_set_parent ...[2024-12-11 13:44:09.666727] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2335:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL
00:07:27.209  [2024-12-11 13:44:09.666763] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2341:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL
00:07:27.209  passed
00:07:27.209    Test: lvol_set_external_parent ...[2024-12-11 13:44:09.666990] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2390:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL
00:07:27.209  [2024-12-11 13:44:09.667028] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2396:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL
00:07:27.209  [2024-12-11 13:44:09.667059] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2403:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID
00:07:27.209  passed
00:07:27.209  
00:07:27.209  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.209                suites      1      1    n/a      0        0
00:07:27.209                 tests     37     37     37      0        0
00:07:27.209               asserts   1505   1505   1505      0      n/a
00:07:27.209  
00:07:27.209  Elapsed time =    0.011 seconds
00:07:27.209  
00:07:27.209  real	0m0.056s
00:07:27.209  user	0m0.031s
00:07:27.209  sys	0m0.025s
00:07:27.209   13:44:09 unittest.unittest_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.209   13:44:09 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:27.209  ************************************
00:07:27.209  END TEST unittest_lvol
00:07:27.209  ************************************
00:07:27.209   13:44:09 unittest -- unit/unittest.sh@233 -- # [[ y == y ]]
00:07:27.209   13:44:09 unittest -- unit/unittest.sh@234 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:27.209   13:44:09 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.209   13:44:09 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.209   13:44:09 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.209  ************************************
00:07:27.209  START TEST unittest_nvme_rdma
00:07:27.209  ************************************
00:07:27.209   13:44:09 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:27.209  
00:07:27.209  
00:07:27.209       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.209       http://cunit.sourceforge.net/
00:07:27.209  
00:07:27.209  
00:07:27.209  Suite: nvme_rdma
00:07:27.209    Test: test_nvme_rdma_build_sgl_request ...[2024-12-11 13:44:09.767256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1424:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:07:27.209  passed
00:07:27.209    Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:07:27.209    Test: test_nvme_rdma_build_contig_request ...passed
00:07:27.209    Test: test_nvme_rdma_build_contig_inline_request ...passed
00:07:27.209    Test: test_nvme_rdma_create_reqs ...passed
00:07:27.209    Test: test_nvme_rdma_create_rsps ...passed
00:07:27.209    Test: test_nvme_rdma_ctrlr_create_qpair ...passed
00:07:27.209    Test: test_nvme_rdma_poller_create ...passed
00:07:27.209    Test: test_nvme_rdma_qpair_process_cm_event ...passed
00:07:27.209    Test: test_nvme_rdma_ctrlr_construct ...passed
00:07:27.209    Test: test_nvme_rdma_req_put_and_get ...passed
00:07:27.209    Test: test_nvme_rdma_req_init ...passed
00:07:27.209    Test: test_nvme_rdma_validate_cm_event ...passed
00:07:27.209    Test: test_nvme_rdma_qpair_init ...[2024-12-11 13:44:09.767516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1611:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:27.209  [2024-12-11 13:44:09.767583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1667:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:07:27.209  [2024-12-11 13:44:09.767714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1563:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:27.209  [2024-12-11 13:44:09.767834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 955:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:07:27.209  [2024-12-11 13:44:09.768162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 873:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:07:27.209  [2024-12-11 13:44:09.768375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1996:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:27.209  [2024-12-11 13:44:09.768414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1996:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:27.209  [2024-12-11 13:44:09.768611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 479:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:07:27.209  [2024-12-11 13:44:09.768970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:07:27.209  [2024-12-11 13:44:09.769029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:07:27.209  passed
00:07:27.209    Test: test_nvme_rdma_qpair_submit_request ...passed
00:07:27.210    Test: test_rdma_ctrlr_get_memory_domains ...passed
00:07:27.210    Test: test_rdma_get_memory_translation ...passed
00:07:27.210    Test: test_get_rdma_qpair_from_wc ...passed
00:07:27.210    Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:07:27.210    Test: test_nvme_rdma_poll_group_get_stats ...[2024-12-11 13:44:09.769198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1413:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:07:27.210  [2024-12-11 13:44:09.769268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1424:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:07:27.210  passed
00:07:27.210    Test: test_nvme_rdma_qpair_set_poller ...passed
00:07:27.210  [2024-12-11 13:44:09.769362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3574:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:27.210  [2024-12-11 13:44:09.769408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3574:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:27.210  [2024-12-11 13:44:09.769594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3268:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:27.210  [2024-12-11 13:44:09.769656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3314:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:07:27.210  [2024-12-11 13:44:09.769699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 676:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7b9865b13a00 on poll group 0x50c000000040
00:07:27.210  [2024-12-11 13:44:09.769743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3268:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:27.210  [2024-12-11 13:44:09.769783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3314:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil)
00:07:27.210  [2024-12-11 13:44:09.769817] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 676:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7b9865b13a00 on poll group 0x50c000000040
00:07:27.210  [2024-12-11 13:44:09.769893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 654:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:27.210  
00:07:27.210  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.210                suites      1      1    n/a      0        0
00:07:27.210                 tests     21     21     21      0        0
00:07:27.210               asserts    395    395    395      0      n/a
00:07:27.210  
00:07:27.210  Elapsed time =    0.003 seconds
00:07:27.210  
00:07:27.210  real	0m0.042s
00:07:27.210  user	0m0.017s
00:07:27.210  sys	0m0.026s
00:07:27.210   13:44:09 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.210   13:44:09 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:27.210  ************************************
00:07:27.210  END TEST unittest_nvme_rdma
00:07:27.210  ************************************
00:07:27.210   13:44:09 unittest -- unit/unittest.sh@235 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:27.210   13:44:09 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.210   13:44:09 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.210   13:44:09 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.210  ************************************
00:07:27.210  START TEST unittest_nvmf_transport
00:07:27.210  ************************************
00:07:27.210   13:44:09 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:27.210  
00:07:27.210  
00:07:27.210       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.210       http://cunit.sourceforge.net/
00:07:27.210  
00:07:27.210  
00:07:27.210  Suite: nvmf
00:07:27.210    Test: test_spdk_nvmf_transport_create ...[2024-12-11 13:44:09.866107] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:07:27.210  [2024-12-11 13:44:09.866361] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:07:27.210  [2024-12-11 13:44:09.866436] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:07:27.210  passed
00:07:27.210    Test: test_nvmf_transport_poll_group_create ...[2024-12-11 13:44:09.866519] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB
00:07:27.210  passed
00:07:27.210    Test: test_spdk_nvmf_transport_opts_init ...[2024-12-11 13:44:09.866898] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 834:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:07:27.210  passed
00:07:27.210    Test: test_spdk_nvmf_transport_listen_ext ...[2024-12-11 13:44:09.866937] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 839:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:07:27.210  [2024-12-11 13:44:09.866971] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 844:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value
00:07:27.210  passed
00:07:27.210  
00:07:27.210  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.210                suites      1      1    n/a      0        0
00:07:27.210                 tests      4      4      4      0        0
00:07:27.210               asserts     49     49     49      0      n/a
00:07:27.210  
00:07:27.210  Elapsed time =    0.001 seconds
00:07:27.210  
00:07:27.210  real	0m0.052s
00:07:27.210  user	0m0.023s
00:07:27.210  sys	0m0.030s
00:07:27.210   13:44:09 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.210   13:44:09 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x
00:07:27.210  ************************************
00:07:27.210  END TEST unittest_nvmf_transport
00:07:27.210  ************************************
00:07:27.210   13:44:09 unittest -- unit/unittest.sh@236 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:27.210   13:44:09 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.210   13:44:09 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.210   13:44:09 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.210  ************************************
00:07:27.210  START TEST unittest_rdma
00:07:27.210  ************************************
00:07:27.210   13:44:09 unittest.unittest_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:27.210  
00:07:27.210  
00:07:27.210       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.210       http://cunit.sourceforge.net/
00:07:27.210  
00:07:27.210  
00:07:27.210  Suite: rdma_common
00:07:27.210    Test: test_spdk_rdma_pd ...[2024-12-11 13:44:09.952541] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:07:27.210  [2024-12-11 13:44:09.952977] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:07:27.210  passed
00:07:27.210  
00:07:27.210  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.210                suites      1      1    n/a      0        0
00:07:27.210                 tests      1      1      1      0        0
00:07:27.210               asserts     31     31     31      0      n/a
00:07:27.210  
00:07:27.210  Elapsed time =    0.001 seconds
00:07:27.210  
00:07:27.210  real	0m0.034s
00:07:27.210  user	0m0.014s
00:07:27.210  sys	0m0.020s
00:07:27.210   13:44:09 unittest.unittest_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.210   13:44:09 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:27.210  ************************************
00:07:27.210  END TEST unittest_rdma
00:07:27.210  ************************************
00:07:27.469   13:44:10 unittest -- unit/unittest.sh@237 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:27.469   13:44:10 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.469   13:44:10 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.469   13:44:10 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.469  ************************************
00:07:27.469  START TEST unittest_nvmf_rdma
00:07:27.469  ************************************
00:07:27.469   13:44:10 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:27.469  
00:07:27.469  
00:07:27.469       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.469       http://cunit.sourceforge.net/
00:07:27.469  
00:07:27.469  
00:07:27.469  Suite: nvmf
00:07:27.469    Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-12-11 13:44:10.040760] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:07:27.469  [2024-12-11 13:44:10.040999] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:07:27.469  [2024-12-11 13:44:10.041037] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:07:27.469  passed
00:07:27.469    Test: test_spdk_nvmf_rdma_request_process ...passed
00:07:27.469    Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:07:27.469    Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:07:27.469    Test: test_nvmf_rdma_opts_init ...passed
00:07:27.469    Test: test_nvmf_rdma_request_free_data ...passed
00:07:27.469    Test: test_nvmf_rdma_resources_create ...passed
00:07:27.469    Test: test_nvmf_rdma_qpair_compare ...passed
00:07:27.469    Test: test_nvmf_rdma_resize_cq ...[2024-12-11 13:44:10.043714] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:07:27.469  Using CQ of insufficient size may lead to CQ overrun
00:07:27.469  [2024-12-11 13:44:10.043774] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:07:27.469  [2024-12-11 13:44:10.043814] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 968:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:27.469  passed
00:07:27.469  
00:07:27.469  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.469                suites      1      1    n/a      0        0
00:07:27.469                 tests      9      9      9      0        0
00:07:27.469               asserts    579    579    579      0      n/a
00:07:27.469  
00:07:27.469  Elapsed time =    0.003 seconds
00:07:27.469  
00:07:27.469  real	0m0.046s
00:07:27.469  user	0m0.023s
00:07:27.469  sys	0m0.023s
00:07:27.469   13:44:10 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.469   13:44:10 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:27.469  ************************************
00:07:27.469  END TEST unittest_nvmf_rdma
00:07:27.469  ************************************
00:07:27.469   13:44:10 unittest -- unit/unittest.sh@240 -- # [[ y == y ]]
00:07:27.469   13:44:10 unittest -- unit/unittest.sh@241 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:27.469   13:44:10 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.469   13:44:10 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.469   13:44:10 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.469  ************************************
00:07:27.469  START TEST unittest_nvme_cuse
00:07:27.469  ************************************
00:07:27.469   13:44:10 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:27.469  
00:07:27.469  
00:07:27.469       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.469       http://cunit.sourceforge.net/
00:07:27.469  
00:07:27.469  
00:07:27.469  Suite: nvme_cuse
00:07:27.469    Test: test_cuse_nvme_submit_io_read_write ...passed
00:07:27.469    Test: test_cuse_nvme_submit_io_read_write_with_md ...passed
00:07:27.469    Test: test_cuse_nvme_submit_passthru_cmd ...passed
00:07:27.469    Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed
00:07:27.469    Test: test_nvme_cuse_get_cuse_ns_device ...passed
00:07:27.469    Test: test_cuse_nvme_submit_io ...[2024-12-11 13:44:10.128017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid
00:07:27.469  passed
00:07:27.469    Test: test_cuse_nvme_reset ...passed
00:07:27.469    Test: test_nvme_cuse_stop ...[2024-12-11 13:44:10.128242] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported
00:07:28.036  passed
00:07:28.036    Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed
00:07:28.036  
00:07:28.036  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.036                suites      1      1    n/a      0        0
00:07:28.036                 tests      9      9      9      0        0
00:07:28.036               asserts    118    118    118      0      n/a
00:07:28.036  
00:07:28.036  Elapsed time =    0.503 seconds
00:07:28.036  
00:07:28.036  real	0m0.533s
00:07:28.036  user	0m0.233s
00:07:28.036  sys	0m0.300s
00:07:28.036   13:44:10 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.036   13:44:10 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x
00:07:28.036  ************************************
00:07:28.036  END TEST unittest_nvme_cuse
00:07:28.036  ************************************
00:07:28.036   13:44:10 unittest -- unit/unittest.sh@244 -- # run_test unittest_nvmf unittest_nvmf
00:07:28.036   13:44:10 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.036   13:44:10 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.036   13:44:10 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.036  ************************************
00:07:28.036  START TEST unittest_nvmf
00:07:28.036  ************************************
00:07:28.036   13:44:10 unittest.unittest_nvmf -- common/autotest_common.sh@1129 -- # unittest_nvmf
00:07:28.036   13:44:10 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:07:28.036  
00:07:28.036  
00:07:28.036       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.036       http://cunit.sourceforge.net/
00:07:28.036  
00:07:28.036  
00:07:28.036  Suite: nvmf
00:07:28.036    Test: test_get_log_page ...passed
00:07:28.036    Test: test_process_fabrics_cmd ...[2024-12-11 13:44:10.710826] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:07:28.036  passed
00:07:28.036    Test: test_connect ...[2024-12-11 13:44:10.711048] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4890:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT
00:07:28.036  [2024-12-11 13:44:10.711753] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1016:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:07:28.036  [2024-12-11 13:44:10.711807] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 878:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:07:28.036  [2024-12-11 13:44:10.711837] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1055:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:07:28.036  [2024-12-11 13:44:10.711872] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:07:28.036  [2024-12-11 13:44:10.711900] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 889:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:07:28.036  [2024-12-11 13:44:10.711940] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 896:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:07:28.036  [2024-12-11 13:44:10.711975] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 902:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:07:28.036  [2024-12-11 13:44:10.712005] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 930:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:07:28.036  [2024-12-11 13:44:10.712095] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:07:28.036  [2024-12-11 13:44:10.712159] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 679:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:07:28.036  [2024-12-11 13:44:10.712422] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 685:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:07:28.036  [2024-12-11 13:44:10.712484] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 691:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:07:28.036  [2024-12-11 13:44:10.712555] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 698:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:07:28.036  [2024-12-11 13:44:10.712617] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 722:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:07:28.036  [2024-12-11 13:44:10.712739] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0)
00:07:28.036  passed
00:07:28.036    Test: test_get_ns_id_desc_list ...[2024-12-11 13:44:10.712874] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 809:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil))
00:07:28.036  [2024-12-11 13:44:10.712920] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 809:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil))
00:07:28.036  passed
00:07:28.036    Test: test_identify_ns ...[2024-12-11 13:44:10.713214] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:28.036  [2024-12-11 13:44:10.713465] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:07:28.036  [2024-12-11 13:44:10.713568] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:07:28.036  passed
00:07:28.036    Test: test_identify_ns_iocs_specific ...[2024-12-11 13:44:10.713723] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:28.036  [2024-12-11 13:44:10.713942] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:28.036  passed
00:07:28.036    Test: test_reservation_write_exclusive ...passed
00:07:28.036    Test: test_reservation_exclusive_access ...passed
00:07:28.037    Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:07:28.037    Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:07:28.037    Test: test_reservation_notification_log_page ...passed
00:07:28.037    Test: test_get_dif_ctx ...passed
00:07:28.037    Test: test_set_get_features ...passed
00:07:28.037    Test: test_identify_ctrlr ...[2024-12-11 13:44:10.714495] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1652:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:28.037  [2024-12-11 13:44:10.714537] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1652:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:28.037  [2024-12-11 13:44:10.714560] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1663:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:07:28.037  [2024-12-11 13:44:10.714592] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1739:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:07:28.037  passed
00:07:28.037    Test: test_identify_ctrlr_iocs_specific ...passed
00:07:28.037    Test: test_custom_admin_cmd ...passed
00:07:28.037    Test: test_fused_compare_and_write ...passed
00:07:28.037    Test: test_multi_async_event_reqs ...passed
00:07:28.037    Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:07:28.037    Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:07:28.037    Test: test_multi_async_events ...passed
00:07:28.037    Test: test_rae ...passed
00:07:28.037    Test: test_nvmf_ctrlr_create_destruct ...passed
00:07:28.037    Test: test_nvmf_ctrlr_use_zcopy ...passed
00:07:28.037    Test: test_spdk_nvmf_request_zcopy_start ...passed
00:07:28.037    Test: test_zcopy_read ...passed
00:07:28.037    Test: test_zcopy_write ...passed
00:07:28.037    Test: test_nvmf_property_set ...passed
00:07:28.037    Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed
00:07:28.037    Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed
00:07:28.037    Test: test_nvmf_ctrlr_ns_attachment ...passed
00:07:28.037    Test: test_nvmf_check_qpair_active ...passed
00:07:28.037  [2024-12-11 13:44:10.715072] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4398:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:07:28.037  [2024-12-11 13:44:10.715109] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4387:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:28.037  [2024-12-11 13:44:10.715139] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4405:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:28.037  [2024-12-11 13:44:10.715692] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4890:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT
00:07:28.037  [2024-12-11 13:44:10.715736] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4916:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4
00:07:28.037  [2024-12-11 13:44:10.715932] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1950:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:28.037  [2024-12-11 13:44:10.715957] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1950:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:28.037  [2024-12-11 13:44:10.716006] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1974:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:07:28.037  [2024-12-11 13:44:10.716029] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1980:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:07:28.037  [2024-12-11 13:44:10.716055] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1992:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:28.037  [2024-12-11 13:44:10.716078] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1992:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:28.037  [2024-12-11 13:44:10.716270] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4890:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT
00:07:28.037  [2024-12-11 13:44:10.716304] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4904:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication
00:07:28.037  [2024-12-11 13:44:10.716335] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4916:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0
00:07:28.037  [2024-12-11 13:44:10.716360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4916:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4
00:07:28.037  [2024-12-11 13:44:10.716373] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4916:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5
00:07:28.037  
00:07:28.037  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.037                suites      1      1    n/a      0        0
00:07:28.037                 tests     32     32     32      0        0
00:07:28.037               asserts    996    996    996      0      n/a
00:07:28.037  
00:07:28.037  Elapsed time =    0.006 seconds
00:07:28.037   13:44:10 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:07:28.037  
00:07:28.037  
00:07:28.037       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.037       http://cunit.sourceforge.net/
00:07:28.037  
00:07:28.037  
00:07:28.037  Suite: nvmf
00:07:28.037    Test: test_get_rw_params ...passed
00:07:28.037    Test: test_get_rw_ext_params ...passed
00:07:28.037    Test: test_lba_in_range ...passed
00:07:28.037    Test: test_get_dif_ctx ...passed
00:07:28.037    Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:07:28.037    Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-12-11 13:44:10.756804] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 522:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:07:28.037  [2024-12-11 13:44:10.757073] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 530:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:07:28.037  passed
00:07:28.037    Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed
00:07:28.037    Test: test_nvmf_bdev_ctrlr_cmd ...[2024-12-11 13:44:10.757112] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 537:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:07:28.037  [2024-12-11 13:44:10.757184] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1041:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:07:28.037  [2024-12-11 13:44:10.757240] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1048:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:07:28.037  passed
00:07:28.037    Test: test_nvmf_bdev_ctrlr_read_write_cmd ...[2024-12-11 13:44:10.757291] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 476:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:07:28.037  [2024-12-11 13:44:10.757334] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 483:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:07:28.037  [2024-12-11 13:44:10.757384] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 575:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:07:28.037  [2024-12-11 13:44:10.757424] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 582:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:07:28.037  passed
00:07:28.037    Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed
00:07:28.037  
00:07:28.037  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.037                suites      1      1    n/a      0        0
00:07:28.037                 tests     10     10     10      0        0
00:07:28.037               asserts    161    161    161      0      n/a
00:07:28.037  
00:07:28.037  Elapsed time =    0.001 seconds
00:07:28.037   13:44:10 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:07:28.037  
00:07:28.037  
00:07:28.037       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.037       http://cunit.sourceforge.net/
00:07:28.037  
00:07:28.037  
00:07:28.037  Suite: nvmf
00:07:28.037    Test: test_discovery_log ...passed
00:07:28.037    Test: test_discovery_log_with_filters ...passed
00:07:28.037  
00:07:28.037  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.037                suites      1      1    n/a      0        0
00:07:28.037                 tests      2      2      2      0        0
00:07:28.037               asserts    238    238    238      0      n/a
00:07:28.037  
00:07:28.037  Elapsed time =    0.004 seconds
00:07:28.296   13:44:10 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:07:28.296  
00:07:28.296  
00:07:28.296       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.296       http://cunit.sourceforge.net/
00:07:28.296  
00:07:28.296  
00:07:28.296  Suite: nvmf
00:07:28.296    Test: nvmf_test_create_subsystem ...[2024-12-11 13:44:10.849027] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain a user-specified name after the ':' prefix.
00:07:28.296  [2024-12-11 13:44:10.849319] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid
00:07:28.296  [2024-12-11 13:44:10.849448] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
00:07:28.296  [2024-12-11 13:44:10.849487] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid
00:07:28.296  [2024-12-11 13:44:10.849528] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
00:07:28.296  [2024-12-11 13:44:10.849563] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid
00:07:28.296  [2024-12-11 13:44:10.849614] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
00:07:28.296  [2024-12-11 13:44:10.849667] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid
00:07:28.296  [2024-12-11 13:44:10.849717] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
00:07:28.296  [2024-12-11 13:44:10.849756] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid
00:07:28.296  [2024-12-11 13:44:10.849795] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
00:07:28.296  [2024-12-11 13:44:10.849829] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid
00:07:28.296  [2024-12-11 13:44:10.849960] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
00:07:28.296  [2024-12-11 13:44:10.849997] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid
00:07:28.296  [2024-12-11 13:44:10.850105] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:07:28.296  [2024-12-11 13:44:10.850139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid
00:07:28.296  [2024-12-11 13:44:10.850253] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
00:07:28.296  [2024-12-11 13:44:10.850283] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid
00:07:28.296  [2024-12-11 13:44:10.850329] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:28.296  [2024-12-11 13:44:10.850367] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid
00:07:28.296  [2024-12-11 13:44:10.850403] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:28.296  passed
00:07:28.296    Test: test_spdk_nvmf_subsystem_add_ns ...[2024-12-11 13:44:10.850436] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid
00:07:28.296  passed
00:07:28.296    Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-12-11 13:44:10.850777] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
00:07:28.296  [2024-12-11 13:44:10.850823] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2103:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
00:07:28.296  passed
00:07:28.296    Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:07:28.296    Test: test_spdk_nvmf_ns_visible ...[2024-12-11 13:44:10.851037] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2241:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespaces.
00:07:28.296  [2024-12-11 13:44:10.851383] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11
00:07:28.296  passed
00:07:28.296    Test: test_reservation_register ...[2024-12-11 13:44:10.851985] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_register_with_ptpl ...[2024-12-11 13:44:10.852131] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3288:nvmf_ns_reservation_register: *ERROR*: No registrant
00:07:28.297  passed
00:07:28.297    Test: test_reservation_acquire_preempt_1 ...[2024-12-11 13:44:10.853592] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_acquire_release_with_ptpl ...passed
00:07:28.297    Test: test_reservation_release ...[2024-12-11 13:44:10.855663] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_unregister_notification ...[2024-12-11 13:44:10.855900] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_release_notification ...[2024-12-11 13:44:10.856170] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_release_notification_write_exclusive ...[2024-12-11 13:44:10.856415] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_clear_notification ...[2024-12-11 13:44:10.856612] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_reservation_preempt_notification ...[2024-12-11 13:44:10.856854] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3230:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1
00:07:28.297  passed
00:07:28.297    Test: test_spdk_nvmf_ns_event ...passed
00:07:28.297    Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:07:28.297    Test: test_nvmf_subsystem_add_ctrlr ...passed
00:07:28.297    Test: test_spdk_nvmf_subsystem_add_host ...[2024-12-11 13:44:10.857810] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:07:28.297  [2024-12-11 13:44:10.857886] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport
00:07:28.297  passed
00:07:28.297    Test: test_nvmf_ns_reservation_report ...[2024-12-11 13:44:10.858119] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3593:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:07:28.297  passed
00:07:28.297    Test: test_nvmf_nqn_is_valid ...passed
00:07:28.297    Test: test_nvmf_ns_reservation_restore ...[2024-12-11 13:44:10.858185] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:07:28.297  [2024-12-11 13:44:10.858232] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:c0210874-b1c4-42c8-89c9-db1e359cc5e": uuid is not the correct length
00:07:28.297  [2024-12-11 13:44:10.858261] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:07:28.297  passed
00:07:28.297    Test: test_nvmf_subsystem_state_change ...[2024-12-11 13:44:10.858384] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2787:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:07:28.297  passed
00:07:28.297    Test: test_nvmf_reservation_custom_ops ...passed
00:07:28.297  
00:07:28.297  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.297                suites      1      1    n/a      0        0
00:07:28.297                 tests     24     24     24      0        0
00:07:28.297               asserts    499    499    499      0      n/a
00:07:28.297  
00:07:28.297  Elapsed time =    0.011 seconds
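
The create-subsystem failures above enumerate the NQN grammar the library enforces: total length between 11 and 223 bytes, an "nqn.yyyy-mm." prefix, reversed-domain labels that start with a letter and end alphanumeric, a user-specified name after ':', and a well-formed UUID for the nqn.2014-08.org.nvmexpress:uuid: form. A rough standalone validator covering the length and label rules is sketched below; the helper names are made up and the UUID/date branches are deliberately omitted, so this is not SPDK's nvmf_nqn_is_valid.

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define NQN_MIN_LEN 11
    #define NQN_MAX_LEN 223

    /* One dot-separated domain label: must start with a letter, end with an
     * alphanumeric, and contain only alphanumerics or '-' in between. */
    static bool
    label_ok(const char *s, size_t len)
    {
        if (len == 0 || !isalpha((unsigned char)s[0]) ||
            !isalnum((unsigned char)s[len - 1])) {
            return false;
        }
        for (size_t i = 1; i + 1 < len; i++) {
            if (!isalnum((unsigned char)s[i]) && s[i] != '-') {
                return false;
            }
        }
        return true;
    }

    /* Rough NQN check: length bounds, "nqn.yyyy-mm." prefix, domain labels,
     * and a user-specified part after ':'. Date digits and the
     * nqn.2014-08.org.nvmexpress:uuid: form are not validated here. */
    static bool
    nqn_ok(const char *nqn)
    {
        size_t len = strlen(nqn);

        if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
            return false;                /* e.g. "length 224 > max 223" */
        }
        if (len < 13 || strncmp(nqn, "nqn.", 4) != 0 || nqn[11] != '.') {
            return false;
        }

        const char *domain = nqn + 12;
        const char *colon = strchr(domain, ':');

        if (colon == NULL || colon[1] == '\0') {
            return false;                /* user-specified name required */
        }
        for (const char *p = domain; p < colon; ) {
            const char *dot = memchr(p, '.', (size_t)(colon - p));
            const char *end = (dot != NULL) ? dot : colon;

            if (!label_ok(p, (size_t)(end - p))) {
                return false;            /* "3spdk", "-spdk", "spdk-", ".." */
            }
            p = end + 1;
        }
        return true;
    }

    int main(void)
    {
        return (nqn_ok("nqn.2016-06.io.spdk:cnode1") &&
                !nqn_ok("nqn.2016-06.io.3spdk:sub")) ? 0 : 1;
    }
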
00:07:28.297   13:44:10 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:07:28.297  
00:07:28.297  
00:07:28.297       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.297       http://cunit.sourceforge.net/
00:07:28.297  
00:07:28.297  
00:07:28.297  Suite: nvmf
00:07:28.297    Test: test_nvmf_tcp_create ...[2024-12-11 13:44:10.940080] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 829:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:07:28.297  passed
00:07:28.297    Test: test_nvmf_tcp_destroy ...passed
00:07:28.297    Test: test_nvmf_tcp_poll_group_create ...passed
00:07:28.297    Test: test_nvmf_tcp_send_c2h_data ...passed
00:07:28.297    Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:07:28.297    Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:07:28.297    Test: test_nvmf_tcp_qpair_init_mem_resource ...[2024-12-11 13:44:11.057998] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f909cb0 is same with the state(5) to be set
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_send_c2h_term_req ...[2024-12-11 13:44:11.098237] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.098329] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90b030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.098385] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90b030 is same with the state(6) to be set
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:07:28.557    Test: test_nvmf_tcp_icreq_handle ...passed
00:07:28.557    Test: test_nvmf_tcp_check_xfer_type ...passed
00:07:28.557    Test: test_nvmf_tcp_invalid_sgl ...[2024-12-11 13:44:11.098427] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.098460] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90b030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.098548] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2296:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:28.557  [2024-12-11 13:44:11.098604] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.098658] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90d190 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.098691] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2296:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:28.557  [2024-12-11 13:44:11.098718] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90d190 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.098760] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.098795] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90d190 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.098843] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.098878] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f90d190 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.098971] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2705:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_pdu_ch_handle ...[2024-12-11 13:44:11.099002] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099045] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f9116f0 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099106] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2423:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU; rejecting this pdu=0x71d87f80c8e0
00:07:28.557  [2024-12-11 13:44:11.099175] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099214] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099249] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2480:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x71d87f80c030
00:07:28.557  [2024-12-11 13:44:11.099285] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099332] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099368] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2433:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:07:28.557  [2024-12-11 13:44:11.099410] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099444] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099499] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2472:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:07:28.557  [2024-12-11 13:44:11.099541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099577] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099617] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099670] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099720] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-12-11 13:44:11.099765] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099811] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099852] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099889] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099921] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.099952] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.099992] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  [2024-12-11 13:44:11.100038] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1236:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:28.557  [2024-12-11 13:44:11.100076] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d87f80c030 is same with the state(6) to be set
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-12-11 13:44:11.135670] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 584:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:07:28.557  [2024-12-11 13:44:11.135733] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 595:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-12-11 13:44:11.136738] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 651:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:07:28.557  [2024-12-11 13:44:11.136797] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 656:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:07:28.557  passed
00:07:28.557    Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-12-11 13:44:11.137414] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 725:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:07:28.557  [2024-12-11 13:44:11.137453] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 749:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:07:28.557  passed
00:07:28.557  
00:07:28.557  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.557                suites      1      1    n/a      0        0
00:07:28.557                 tests     17     17     17      0        0
00:07:28.557               asserts    215    215    215      0      n/a
00:07:28.557  
00:07:28.557  Elapsed time =    0.231 seconds
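
The pdu_ch_handle cases above trace the checks applied to an incoming PDU common header: an ICReq is legal only once and must carry a 128-byte header, other PDU types are refused until the connection is negotiated, and unknown types are rejected outright; each rejection triggers a C2H TERM_REQ, which the test's stubbed socket then fails to write (hence the repeated rc=0, errno=2 lines). A condensed sketch of that dispatch, with assumed type codes and a hypothetical qpair struct rather than SPDK's definitions:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum pdu_type { PDU_ICREQ = 0x00, PDU_H2C_TERM = 0x02, PDU_CMD = 0x04 };
    #define ICREQ_HLEN 128u

    struct qpair {
        bool icreq_seen;   /* ICReq already processed? */
        bool negotiated;   /* ICReq/ICResp exchange finished? */
    };

    /* Returns true if the PDU may proceed; on false a TERM_REQ would be sent. */
    static bool
    pdu_ch_handle(struct qpair *q, uint8_t type, uint8_t hlen)
    {
        switch (type) {
        case PDU_ICREQ:
            if (q->icreq_seen) {
                fprintf(stderr, "Already received ICreq PDU, rejecting\n");
                return false;
            }
            if (hlen != ICREQ_HLEN) {
                fprintf(stderr, "Expected ICReq header length %u, got %u\n",
                        ICREQ_HLEN, hlen);
                return false;
            }
            q->icreq_seen = true;
            return true;
        case PDU_CMD:
        case PDU_H2C_TERM:
            if (!q->negotiated) {
                fprintf(stderr, "The TCP/IP connection is not negotiated\n");
                return false;
            }
            return true;
        default:
            fprintf(stderr, "Unexpected PDU type 0x%02x\n", (unsigned)type);
            return false;
        }
    }

    int main(void)
    {
        struct qpair q = {0};
        /* Mirrors the log: ICReq with header length 0 is rejected. */
        return pdu_ch_handle(&q, PDU_ICREQ, 0) ? 1 : 0;
    }
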
00:07:28.557   13:44:11 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:07:28.557  
00:07:28.557  
00:07:28.557       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.557       http://cunit.sourceforge.net/
00:07:28.557  
00:07:28.557  
00:07:28.557  Suite: nvmf
00:07:28.557    Test: test_nvmf_tgt_create_poll_group ...passed
00:07:28.557  
00:07:28.557  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.557                suites      1      1    n/a      0        0
00:07:28.557                 tests      1      1      1      0        0
00:07:28.557               asserts     16     16     16      0      n/a
00:07:28.557  
00:07:28.557  Elapsed time =    0.027 seconds
00:07:28.816  
00:07:28.816  real	0m0.648s
00:07:28.816  user	0m0.264s
00:07:28.816  sys	0m0.382s
00:07:28.816   13:44:11 unittest.unittest_nvmf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.816   13:44:11 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x
00:07:28.816  ************************************
00:07:28.816  END TEST unittest_nvmf
00:07:28.816  ************************************
00:07:28.816   13:44:11 unittest -- unit/unittest.sh@245 -- # [[ n == y ]]
00:07:28.816   13:44:11 unittest -- unit/unittest.sh@250 -- # [[ n == y ]]
00:07:28.816   13:44:11 unittest -- unit/unittest.sh@254 -- # run_test unittest_scsi unittest_scsi
00:07:28.816   13:44:11 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.816   13:44:11 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.816   13:44:11 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.816  ************************************
00:07:28.816  START TEST unittest_scsi
00:07:28.816  ************************************
00:07:28.816   13:44:11 unittest.unittest_scsi -- common/autotest_common.sh@1129 -- # unittest_scsi
00:07:28.816   13:44:11 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:07:28.816  
00:07:28.816  
00:07:28.816       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.816       http://cunit.sourceforge.net/
00:07:28.816  
00:07:28.816  
00:07:28.816  Suite: dev_suite
00:07:28.816    Test: dev_destruct_null_dev ...passed
00:07:28.816    Test: dev_destruct_zero_luns ...passed
00:07:28.816    Test: dev_destruct_null_lun ...passed
00:07:28.816    Test: dev_destruct_success ...passed
00:07:28.816    Test: dev_construct_num_luns_zero ...[2024-12-11 13:44:11.411014] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:07:28.816  [2024-12-11 13:44:11.411212] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:07:28.816  [2024-12-11 13:44:11.411249] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:07:28.816  passed
00:07:28.816    Test: dev_construct_no_lun_zero ...passed
00:07:28.816    Test: dev_construct_null_lun ...passed
00:07:28.816    Test: dev_construct_name_too_long ...[2024-12-11 13:44:11.411290] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:07:28.816  passed
00:07:28.817    Test: dev_construct_success ...passed
00:07:28.817    Test: dev_construct_success_lun_zero_not_first ...passed
00:07:28.817    Test: dev_queue_mgmt_task_success ...passed
00:07:28.817    Test: dev_queue_task_success ...passed
00:07:28.817    Test: dev_stop_success ...passed
00:07:28.817    Test: dev_add_port_max_ports ...passed
00:07:28.817    Test: dev_add_port_construct_failure1 ...passed
00:07:28.817    Test: dev_add_port_construct_failure2 ...[2024-12-11 13:44:11.411528] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:07:28.817  [2024-12-11 13:44:11.411565] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c:  49:scsi_port_construct: *ERROR*: port name too long
00:07:28.817  [2024-12-11 13:44:11.411601] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:07:28.817  passed
00:07:28.817    Test: dev_add_port_success1 ...passed
00:07:28.817    Test: dev_add_port_success2 ...passed
00:07:28.817    Test: dev_add_port_success3 ...passed
00:07:28.817    Test: dev_find_port_by_id_num_ports_zero ...passed
00:07:28.817    Test: dev_find_port_by_id_id_not_found_failure ...passed
00:07:28.817    Test: dev_find_port_by_id_success ...passed
00:07:28.817    Test: dev_add_lun_bdev_not_found ...passed
00:07:28.817    Test: dev_add_lun_no_free_lun_id ...[2024-12-11 13:44:11.412069] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:07:28.817  passed
00:07:28.817    Test: dev_add_lun_success1 ...passed
00:07:28.817    Test: dev_add_lun_success2 ...passed
00:07:28.817    Test: dev_check_pending_tasks ...passed
00:07:28.817    Test: dev_iterate_luns ...passed
00:07:28.817    Test: dev_find_free_lun ...passed
00:07:28.817  
00:07:28.817  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.817                suites      1      1    n/a      0        0
00:07:28.817                 tests     29     29     29      0        0
00:07:28.817               asserts     97     97     97      0      n/a
00:07:28.817  
00:07:28.817  Elapsed time =    0.002 seconds
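
The dev_suite failures come from simple front-door validation when constructing a SCSI device: the name is length-capped, at least one LUN must be supplied (and LUN 0 must be among them), and a device holds a bounded number of ports with unique IDs. A toy version of those guards, under those assumptions and not the SPDK API:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME  255
    #define SCSI_DEV_MAX_PORTS 4

    struct scsi_dev {
        char name[SCSI_DEV_MAX_NAME + 1];
        int  num_ports;
        int  port_ids[SCSI_DEV_MAX_PORTS];
    };

    /* lun_ids: caller-supplied LUN ids; n: how many were given. */
    static bool
    dev_construct_ok(const char *name, const int *lun_ids, size_t n)
    {
        if (strlen(name) > SCSI_DEV_MAX_NAME) {
            return false;                 /* name longer than maximum */
        }
        if (n == 0) {
            return false;                 /* "no LUNs specified" */
        }
        for (size_t i = 0; i < n; i++) {
            if (lun_ids[i] == 0) {
                return true;
            }
        }
        return false;                     /* "no LUN 0 specified" */
    }

    static bool
    dev_add_port_ok(const struct scsi_dev *dev, int id)
    {
        if (dev->num_ports >= SCSI_DEV_MAX_PORTS) {
            return false;                 /* "device already has 4 ports" */
        }
        for (int i = 0; i < dev->num_ports; i++) {
            if (dev->port_ids[i] == id) {
                return false;             /* "device already has port(id)" */
            }
        }
        return true;
    }

    int main(void)
    {
        struct scsi_dev dev = { .num_ports = 4 };
        int luns[] = { 1, 2 };
        /* No LUN 0 -> reject; 5th port -> reject, as in the log. */
        return (!dev_construct_ok("Name", luns, 2) &&
                !dev_add_port_ok(&dev, 1)) ? 0 : 1;
    }
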
00:07:28.817   13:44:11 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:07:28.817  
00:07:28.817  
00:07:28.817       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.817       http://cunit.sourceforge.net/
00:07:28.817  
00:07:28.817  
00:07:28.817  Suite: lun_suite
00:07:28.817    Test: lun_task_mgmt_execute_abort_task_not_supported ...passed
00:07:28.817    Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed
00:07:28.817    Test: lun_task_mgmt_execute_lun_reset ...[2024-12-11 13:44:11.450559] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:07:28.817  [2024-12-11 13:44:11.450925] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:07:28.817  passed
00:07:28.817    Test: lun_task_mgmt_execute_target_reset ...passed
00:07:28.817    Test: lun_task_mgmt_execute_invalid_case ...passed
00:07:28.817    Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:07:28.817    Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:07:28.817    Test: lun_append_task_null_lun_not_supported ...passed
00:07:28.817    Test: lun_execute_scsi_task_pending ...passed
00:07:28.817    Test: lun_execute_scsi_task_complete ...passed
00:07:28.817    Test: lun_execute_scsi_task_resize ...[2024-12-11 13:44:11.451088] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:07:28.817  passed
00:07:28.817    Test: lun_destruct_success ...passed
00:07:28.817    Test: lun_construct_null_ctx ...passed
00:07:28.817    Test: lun_construct_success ...passed
00:07:28.817    Test: lun_reset_task_wait_scsi_task_complete ...passed
00:07:28.817    Test: lun_reset_task_suspend_scsi_task ...passed
00:07:28.817    Test: lun_check_pending_tasks_only_for_specific_initiator ...[2024-12-11 13:44:11.451326] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:07:28.817  passed
00:07:28.817    Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:07:28.817  
00:07:28.817  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.817                suites      1      1    n/a      0        0
00:07:28.817                 tests     18     18     18      0        0
00:07:28.817               asserts    153    153    153      0      n/a
00:07:28.817  
00:07:28.817  Elapsed time =    0.001 seconds
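
The lun_suite mgmt-task cases show the dispatcher failing task-management functions the backend does not support (ABORT TASK, ABORT TASK SET, unknown codes) while letting LUN RESET proceed. In outline, with hypothetical function codes:

    #include <stdio.h>

    enum tmf { TMF_ABORT_TASK, TMF_ABORT_TASK_SET, TMF_LUN_RESET, TMF_UNKNOWN };

    /* Returns 0 if the function is executed, -1 if unsupported. */
    static int
    lun_execute_mgmt_task(enum tmf fn)
    {
        switch (fn) {
        case TMF_LUN_RESET:
            return 0;                         /* supported: reset proceeds */
        case TMF_ABORT_TASK:
            fprintf(stderr, "abort task not supported\n");
            return -1;
        case TMF_ABORT_TASK_SET:
            fprintf(stderr, "abort task set not supported\n");
            return -1;
        default:
            fprintf(stderr, "unknown task not supported\n");
            return -1;
        }
    }

    int main(void)
    {
        return (lun_execute_mgmt_task(TMF_LUN_RESET) == 0 &&
                lun_execute_mgmt_task(TMF_ABORT_TASK) == -1) ? 0 : 1;
    }
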
00:07:28.817   13:44:11 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:07:28.817  
00:07:28.817  
00:07:28.817       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.817       http://cunit.sourceforge.net/
00:07:28.817  
00:07:28.817  
00:07:28.817  Suite: scsi_suite
00:07:28.817    Test: scsi_init ...passed
00:07:28.817  
00:07:28.817  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.817                suites      1      1    n/a      0        0
00:07:28.817                 tests      1      1      1      0        0
00:07:28.817               asserts      1      1      1      0      n/a
00:07:28.817  
00:07:28.817  Elapsed time =    0.000 seconds
00:07:28.817   13:44:11 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:07:28.817  
00:07:28.817  
00:07:28.817       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.817       http://cunit.sourceforge.net/
00:07:28.817  
00:07:28.817  
00:07:28.817  Suite: translation_suite
00:07:28.817    Test: mode_select_6_test ...passed
00:07:28.817    Test: mode_select_6_test2 ...passed
00:07:28.817    Test: mode_sense_6_test ...passed
00:07:28.817    Test: mode_sense_10_test ...passed
00:07:28.817    Test: inquiry_evpd_test ...passed
00:07:28.817    Test: inquiry_standard_test ...passed
00:07:28.817    Test: inquiry_overflow_test ...passed
00:07:28.817    Test: task_complete_test ...passed
00:07:28.817    Test: lba_range_test ...passed
00:07:28.817    Test: xfer_len_test ...[2024-12-11 13:44:11.521975] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:07:28.817  passed
00:07:28.817    Test: xfer_test ...passed
00:07:28.817    Test: scsi_name_padding_test ...passed
00:07:28.817    Test: get_dif_ctx_test ...passed
00:07:28.817    Test: unmap_split_test ...passed
00:07:28.817  
00:07:28.817  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.817                suites      1      1    n/a      0        0
00:07:28.817                 tests     14     14     14      0        0
00:07:28.817               asserts   1205   1205   1205      0      n/a
00:07:28.817  
00:07:28.817  Elapsed time =    0.006 seconds
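
The single scsi_bdev error above is the transfer-length ceiling checked before a READ/WRITE is issued to the backing bdev; in spirit:

    #include <stdbool.h>
    #include <stdint.h>

    /* Guard applied before submitting a READ/WRITE to the backing bdev. */
    static bool
    xfer_len_ok(uint32_t xfer_len, uint32_t max_xfer_len)
    {
        return xfer_len <= max_xfer_len;
    }

    int main(void)
    {
        return xfer_len_ok(8193, 8192) ? 1 : 0;   /* the log's failing case */
    }
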
00:07:28.817   13:44:11 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:07:28.817  
00:07:28.817  
00:07:28.817       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.817       http://cunit.sourceforge.net/
00:07:28.817  
00:07:28.817  
00:07:28.817  Suite: reservation_suite
00:07:28.817    Test: test_reservation_register ...passed
00:07:28.817    Test: test_reservation_reserve ...[2024-12-11 13:44:11.554307] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  [2024-12-11 13:44:11.554655] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  passed
00:07:28.817    Test: test_all_registrant_reservation_reserve ...[2024-12-11 13:44:11.554759] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:07:28.817  [2024-12-11 13:44:11.554816] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:07:28.817  [2024-12-11 13:44:11.554902] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  passed
00:07:28.817    Test: test_all_registrant_reservation_access ...passed
00:07:28.817    Test: test_reservation_preempt_non_all_regs ...passed
00:07:28.817    Test: test_reservation_preempt_all_regs ...passed
00:07:28.817    Test: test_reservation_cmds_conflict ...passed
00:07:28.817    Test: test_scsi2_reserve_release ...passed
00:07:28.817    Test: test_pr_with_scsi2_reserve_release ...passed
00:07:28.817  [2024-12-11 13:44:11.555103] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  [2024-12-11 13:44:11.555168] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type rejects command 0x8
00:07:28.817  [2024-12-11 13:44:11.555208] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type rejects command 0xaa
00:07:28.817  [2024-12-11 13:44:11.555285] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  [2024-12-11 13:44:11.555347] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:07:28.817  [2024-12-11 13:44:11.555453] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  [2024-12-11 13:44:11.555554] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  [2024-12-11 13:44:11.555652] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type rejects command 0x2a
00:07:28.817  [2024-12-11 13:44:11.555688] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:28.817  [2024-12-11 13:44:11.555738] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:28.817  [2024-12-11 13:44:11.555770] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:28.817  [2024-12-11 13:44:11.555809] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:28.817  [2024-12-11 13:44:11.555893] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa
00:07:28.817  
00:07:28.817  
00:07:28.817  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.817                suites      1      1    n/a      0        0
00:07:28.817                 tests      9      9      9      0        0
00:07:28.817               asserts    344    344    344      0      n/a
00:07:28.817  
00:07:28.817  Elapsed time =    0.002 seconds
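
Nearly every reservation_suite error above is the REGISTER key check: an initiator that is already registered must present its stored reservation key before it may change the registration. In outline, with hypothetical types (not SPDK's scsi_pr structures):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct registrant { uint64_t rkey; };   /* this initiator's registration */

    /* PERSISTENT RESERVE OUT / REGISTER: an already-registered initiator must
     * present its current key. (REGISTER AND IGNORE EXISTING KEY would skip
     * this comparison.) */
    static bool
    pr_out_register_ok(const struct registrant *reg, uint64_t rkey)
    {
        if (reg != NULL && reg->rkey != rkey) {
            fprintf(stderr,
                    "Reservation key 0x%" PRIx64
                    " doesn't match registrant's key 0x%" PRIx64 "\n",
                    rkey, reg->rkey);
            return false;
        }
        return true;
    }

    int main(void)
    {
        struct registrant r = { .rkey = 0xa };
        return pr_out_register_ok(&r, 0xa1) ? 1 : 0;   /* 0xa1 vs 0xa: reject */
    }
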
00:07:28.817  
00:07:28.817  real	0m0.187s
00:07:28.817  user	0m0.082s
00:07:28.817  sys	0m0.105s
00:07:28.817   13:44:11 unittest.unittest_scsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.817   13:44:11 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x
00:07:28.817  ************************************
00:07:28.817  END TEST unittest_scsi
00:07:28.817  ************************************
00:07:29.076    13:44:11 unittest -- unit/unittest.sh@255 -- # uname -s
00:07:29.076   13:44:11 unittest -- unit/unittest.sh@255 -- # '[' Linux = Linux ']'
00:07:29.076   13:44:11 unittest -- unit/unittest.sh@258 -- # run_test unittest_sock unittest_sock
00:07:29.076   13:44:11 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.076   13:44:11 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.076   13:44:11 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:29.076  ************************************
00:07:29.076  START TEST unittest_sock
00:07:29.076  ************************************
00:07:29.076   13:44:11 unittest.unittest_sock -- common/autotest_common.sh@1129 -- # unittest_sock
00:07:29.076   13:44:11 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:07:29.076  
00:07:29.076  
00:07:29.076       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.076       http://cunit.sourceforge.net/
00:07:29.076  
00:07:29.076  
00:07:29.076  Suite: sock
00:07:29.076    Test: posix_sock ...passed
00:07:29.076    Test: ut_sock ...passed
00:07:29.076    Test: posix_sock_group ...passed
00:07:29.076    Test: ut_sock_group ...passed
00:07:29.076    Test: posix_sock_group_fairness ...passed
00:07:29.076    Test: _posix_sock_close ...passed
00:07:29.076    Test: sock_get_default_opts ...passed
00:07:29.076    Test: ut_sock_impl_get_set_opts ...passed
00:07:29.076    Test: posix_sock_impl_get_set_opts ...passed
00:07:29.076    Test: ut_sock_map ...passed
00:07:29.076    Test: override_impl_opts ...passed
00:07:29.076    Test: ut_sock_group_get_ctx ...passed
00:07:29.076    Test: posix_get_interface_name ...passed
00:07:29.076  
00:07:29.076  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.076                suites      1      1    n/a      0        0
00:07:29.076                 tests     13     13     13      0        0
00:07:29.076               asserts    360    360    360      0      n/a
00:07:29.076  
00:07:29.076  Elapsed time =    0.011 seconds
00:07:29.076   13:44:11 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:07:29.076  
00:07:29.076  
00:07:29.076       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.076       http://cunit.sourceforge.net/
00:07:29.076  
00:07:29.076  
00:07:29.076  Suite: posix
00:07:29.076    Test: flush ...passed
00:07:29.076  
00:07:29.076  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.076                suites      1      1    n/a      0        0
00:07:29.076                 tests      1      1      1      0        0
00:07:29.076               asserts     28     28     28      0      n/a
00:07:29.076  
00:07:29.076  Elapsed time =    0.000 seconds
00:07:29.076   13:44:11 unittest.unittest_sock -- unit/unittest.sh@128 -- # [[ n == y ]]
00:07:29.076  
00:07:29.076  real	0m0.125s
00:07:29.076  user	0m0.045s
00:07:29.076  sys	0m0.056s
00:07:29.076   13:44:11 unittest.unittest_sock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.076  ************************************
00:07:29.076  END TEST unittest_sock
00:07:29.076  ************************************
00:07:29.076   13:44:11 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x
00:07:29.076   13:44:11 unittest -- unit/unittest.sh@260 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:29.076   13:44:11 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.076   13:44:11 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.076   13:44:11 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:29.076  ************************************
00:07:29.076  START TEST unittest_thread
00:07:29.076  ************************************
00:07:29.076   13:44:11 unittest.unittest_thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:29.076  
00:07:29.076  
00:07:29.076       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.076       http://cunit.sourceforge.net/
00:07:29.076  
00:07:29.076  
00:07:29.076  Suite: io_channel
00:07:29.076    Test: thread_alloc ...passed
00:07:29.076    Test: thread_send_msg ...passed
00:07:29.076    Test: thread_poller ...passed
00:07:29.076    Test: poller_pause ...passed
00:07:29.076    Test: thread_for_each ...passed
00:07:29.076    Test: for_each_channel_remove ...passed
00:07:29.076    Test: for_each_channel_unreg ...passed
00:07:29.076    Test: thread_name ...[2024-12-11 13:44:11.841700] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2222:spdk_io_device_register: *ERROR*: io_device 0x78c85c709640 already registered (old:0x513000000200 new:0x5130000003c0)
00:07:29.076  passed
00:07:29.076    Test: channel ...[2024-12-11 13:44:11.845654] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2356:spdk_get_io_channel: *ERROR*: could not find io_device 0x594f17102340
00:07:29.076  passed
00:07:29.076    Test: channel_destroy_races ...passed
00:07:29.076    Test: thread_exit_test ...[2024-12-11 13:44:11.850936] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 664:thread_exit: *ERROR*: thread 0x519000007380 timed out and was moved to the exited state forcefully
00:07:29.076  passed
00:07:29.335    Test: thread_update_stats_test ...passed
00:07:29.335    Test: nested_channel ...passed
00:07:29.335    Test: device_unregister_and_thread_exit_race ...passed
00:07:29.335    Test: cache_closest_timed_poller ...passed
00:07:29.335    Test: multi_timed_pollers_have_same_expiration ...passed
00:07:29.335    Test: io_device_lookup ...passed
00:07:29.335    Test: spdk_spin ...[2024-12-11 13:44:11.861675] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3213:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:29.335  [2024-12-11 13:44:11.861728] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x78c85c70a020
00:07:29.335  [2024-12-11 13:44:11.861756] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3251:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:29.335  [2024-12-11 13:44:11.863328] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3214:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:07:29.335  [2024-12-11 13:44:11.863377] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x78c85c70a020
00:07:29.335  [2024-12-11 13:44:11.863405] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3234:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:29.335  [2024-12-11 13:44:11.863434] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x78c85c70a020
00:07:29.335  [2024-12-11 13:44:11.863461] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3234:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:29.335  [2024-12-11 13:44:11.863491] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x78c85c70a020
00:07:29.335  [2024-12-11 13:44:11.863518] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3195:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:07:29.335  [2024-12-11 13:44:11.863545] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x78c85c70a020
00:07:29.335  passed
00:07:29.335    Test: for_each_channel_and_thread_exit_race ...passed
00:07:29.335    Test: for_each_thread_and_thread_exit_race ...passed
00:07:29.335    Test: poller_get_name ...passed
00:07:29.335    Test: poller_get_id ...passed
00:07:29.335    Test: poller_get_state_str ...passed
00:07:29.335    Test: poller_get_period_ticks ...passed
00:07:29.335    Test: poller_get_stats ...passed
00:07:29.335  
00:07:29.335  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.335                suites      1      1    n/a      0        0
00:07:29.335                 tests     25     25     25      0        0
00:07:29.335               asserts    429    429    429      0      n/a
00:07:29.335  
00:07:29.335  Elapsed time =    0.056 seconds
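
The spdk_spin test deliberately triggers each misuse class the thread library can detect: locking from a non-SPDK thread, recursive locking (deadlock), unlocking from the wrong thread, and destroying a held lock. A skeletal owner-tracking lock that reports the same classes is sketched below; it is illustrative only (the detection is not race-free), uses plain pthreads, and needs -pthread to build.

    #include <pthread.h>
    #include <stdio.h>

    struct sspin {
        pthread_mutex_t mtx;
        pthread_t       owner;
        int             held;
    };

    static int
    sspin_lock(struct sspin *s)
    {
        if (s->held && pthread_equal(s->owner, pthread_self())) {
            fprintf(stderr, "spinlock error 2: Deadlock detected\n");
            return -1;
        }
        pthread_mutex_lock(&s->mtx);
        s->owner = pthread_self();
        s->held = 1;
        return 0;
    }

    static int
    sspin_unlock(struct sspin *s)
    {
        if (!s->held || !pthread_equal(s->owner, pthread_self())) {
            fprintf(stderr, "spinlock error 3: Unlock on wrong thread\n");
            return -1;
        }
        s->held = 0;
        pthread_mutex_unlock(&s->mtx);
        return 0;
    }

    static int
    sspin_destroy(struct sspin *s)
    {
        if (s->held) {
            fprintf(stderr, "spinlock error 5: Destroying a held spinlock\n");
            return -1;
        }
        return pthread_mutex_destroy(&s->mtx);
    }

    int main(void)
    {
        struct sspin s = { .mtx = PTHREAD_MUTEX_INITIALIZER };
        sspin_lock(&s);
        int rc = sspin_destroy(&s);   /* held -> error 5, as in the log */
        sspin_unlock(&s);
        return rc == -1 ? 0 : 1;
    }
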
00:07:29.335  ************************************
00:07:29.335  END TEST unittest_thread
00:07:29.335  ************************************
00:07:29.335  
00:07:29.335  real	0m0.099s
00:07:29.335  user	0m0.067s
00:07:29.335  sys	0m0.031s
00:07:29.335   13:44:11 unittest.unittest_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.335   13:44:11 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x
00:07:29.335   13:44:11 unittest -- unit/unittest.sh@261 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:29.335   13:44:11 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.335   13:44:11 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.335   13:44:11 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:29.335  ************************************
00:07:29.335  START TEST unittest_iobuf
00:07:29.335  ************************************
00:07:29.335   13:44:11 unittest.unittest_iobuf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:29.335  
00:07:29.335  
00:07:29.335       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.335       http://cunit.sourceforge.net/
00:07:29.335  
00:07:29.335  
00:07:29.335  Suite: io_channel
00:07:29.335    Test: iobuf ...passed
00:07:29.336    Test: iobuf_cache ...[2024-12-11 13:44:11.970874] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:29.336  [2024-12-11 13:44:11.971147] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:29.336  [2024-12-11 13:44:11.971257] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:07:29.336  [2024-12-11 13:44:11.971303] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:29.336  [2024-12-11 13:44:11.971378] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:29.336  [2024-12-11 13:44:11.971423] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:29.336  passed
00:07:29.336    Test: iobuf_priority ...passed
00:07:29.336  
00:07:29.336  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.336                suites      1      1    n/a      0        0
00:07:29.336                 tests      3      3      3      0        0
00:07:29.336               asserts    127    127    127      0      n/a
00:07:29.336  
00:07:29.336  Elapsed time =    0.010 seconds
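
The iobuf_cache errors are sizing diagnostics: each channel asks to pre-populate a per-channel cache from the shared small/large buffer pools, and when a pool runs dry the populate step stops short and reports how far it got (4/5 and 0/4 above). Roughly, under assumed names:

    #include <stdint.h>
    #include <stdio.h>

    /* Try to move 'want' buffers from a shared pool into a channel cache.
     * Returns how many were actually cached. */
    static uint32_t
    cache_populate(uint32_t *pool_free, uint32_t want, const char *which)
    {
        uint32_t got = (*pool_free >= want) ? want : *pool_free;

        *pool_free -= got;
        if (got < want) {
            fprintf(stderr,
                    "Failed to populate iobuf %s buffer cache at %u/%u entries. "
                    "You may need to increase spdk_iobuf_opts.%s_pool_count\n",
                    which, got, want, which);
        }
        return got;
    }

    int main(void)
    {
        uint32_t small_pool = 4;
        cache_populate(&small_pool, 5, "small");   /* -> 4/5, pool exhausted */
        cache_populate(&small_pool, 4, "small");   /* -> 0/4, as in the log */
        return 0;
    }
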
00:07:29.336  
00:07:29.336  real	0m0.049s
00:07:29.336  user	0m0.030s
00:07:29.336  sys	0m0.019s
00:07:29.336   13:44:11 unittest.unittest_iobuf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.336  ************************************
00:07:29.336  END TEST unittest_iobuf
00:07:29.336  ************************************
00:07:29.336   13:44:11 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x
00:07:29.336   13:44:12 unittest -- unit/unittest.sh@262 -- # run_test unittest_util unittest_util
00:07:29.336   13:44:12 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.336   13:44:12 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.336   13:44:12 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:29.336  ************************************
00:07:29.336  START TEST unittest_util
00:07:29.336  ************************************
00:07:29.336   13:44:12 unittest.unittest_util -- common/autotest_common.sh@1129 -- # unittest_util
00:07:29.336   13:44:12 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:07:29.336  
00:07:29.336  
00:07:29.336       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.336       http://cunit.sourceforge.net/
00:07:29.336  
00:07:29.336  
00:07:29.336  Suite: base64
00:07:29.336    Test: test_base64_get_encoded_strlen ...passed
00:07:29.336    Test: test_base64_get_decoded_len ...passed
00:07:29.336    Test: test_base64_encode ...passed
00:07:29.336    Test: test_base64_decode ...passed
00:07:29.336    Test: test_base64_urlsafe_encode ...passed
00:07:29.336    Test: test_base64_urlsafe_decode ...passed
00:07:29.336  
00:07:29.336  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.336                suites      1      1    n/a      0        0
00:07:29.336                 tests      6      6      6      0        0
00:07:29.336               asserts    112    112    112      0      n/a
00:07:29.336  
00:07:29.336  Elapsed time =    0.000 seconds
00:07:29.336   13:44:12 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:07:29.336  
00:07:29.336  
00:07:29.336       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.336       http://cunit.sourceforge.net/
00:07:29.336  
00:07:29.336  
00:07:29.336  Suite: bit_array
00:07:29.336    Test: test_1bit ...passed
00:07:29.336    Test: test_64bit ...passed
00:07:29.336    Test: test_find ...passed
00:07:29.336    Test: test_resize ...passed
00:07:29.336    Test: test_errors ...passed
00:07:29.336    Test: test_count ...passed
00:07:29.336    Test: test_mask_store_load ...passed
00:07:29.336    Test: test_mask_clear ...passed
00:07:29.336  
00:07:29.336  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.336                suites      1      1    n/a      0        0
00:07:29.336                 tests      8      8      8      0        0
00:07:29.336               asserts   5075   5075   5075      0      n/a
00:07:29.336  
00:07:29.336  Elapsed time =    0.002 seconds
00:07:29.336   13:44:12 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: cpuset
00:07:29.595    Test: test_cpuset ...passed
00:07:29.595    Test: test_cpuset_parse ...[2024-12-11 13:44:12.122489] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '['
00:07:29.595  [2024-12-11 13:44:12.122722] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:07:29.595  [2024-12-11 13:44:12.122757] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:07:29.595  [2024-12-11 13:44:12.122791] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:07:29.595  [2024-12-11 13:44:12.123042] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:07:29.595  [2024-12-11 13:44:12.123085] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:07:29.595  [2024-12-11 13:44:12.123112] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:07:29.595  [2024-12-11 13:44:12.123143] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:07:29.595  passed
00:07:29.595    Test: test_cpuset_fmt ...passed
00:07:29.595    Test: test_cpuset_foreach ...passed
00:07:29.595  
00:07:29.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.595                suites      1      1    n/a      0        0
00:07:29.595                 tests      4      4      4      0        0
00:07:29.595               asserts     90     90     90      0      n/a
00:07:29.595  
00:07:29.595  Elapsed time =    0.002 seconds
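
The cpuset parse failures walk the grammar of a bracketed core list: brackets must close, items are N or N-M with N <= M, empty items are invalid, core numbers must stay below the mask size (1024 here), and each number must fit the parser's integer type. A pared-down validator for a single list item, under those assumptions:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_CORE 1023

    /* Parse "N" or "N-M" from s; returns false on any grammar violation. */
    static bool
    parse_core_item(const char *s, long *lo, long *hi)
    {
        char *end;

        errno = 0;
        *lo = strtol(s, &end, 10);
        if (end == s || errno == ERANGE) {
            return false;            /* empty item or number overflow */
        }
        if (*end == '-') {
            const char *p = end + 1;
            errno = 0;
            *hi = strtol(p, &end, 10);
            if (end == p || errno == ERANGE) {
                return false;        /* missing or overflowing upper bound */
            }
        } else {
            *hi = *lo;
        }
        if (*end != '\0') {
            return false;            /* trailing junk such as ',' or ']' */
        }
        if (*lo > *hi) {
            return false;    /* "(11 > 10)"; also catches "10--11" (hi = -11) */
        }
        if (*lo < 0 || *hi > MAX_CORE) {
            return false;            /* "Core number 1025 is out of range" */
        }
        return true;
    }

    int main(void)
    {
        long lo, hi;
        return (parse_core_item("10-11", &lo, &hi) &&
                !parse_core_item("1025", &lo, &hi)) ? 0 : 1;
    }
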
00:07:29.595   13:44:12 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: crc16
00:07:29.595    Test: test_crc16_t10dif ...passed
00:07:29.595    Test: test_crc16_t10dif_seed ...passed
00:07:29.595    Test: test_crc16_t10dif_copy ...passed
00:07:29.595  
00:07:29.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.595                suites      1      1    n/a      0        0
00:07:29.595                 tests      3      3      3      0        0
00:07:29.595               asserts      5      5      5      0      n/a
00:07:29.595  
00:07:29.595  Elapsed time =    0.000 seconds
00:07:29.595   13:44:12 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: crc32_ieee
00:07:29.595    Test: test_crc32_ieee ...passed
00:07:29.595  
00:07:29.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.595                suites      1      1    n/a      0        0
00:07:29.595                 tests      1      1      1      0        0
00:07:29.595               asserts      1      1      1      0      n/a
00:07:29.595  
00:07:29.595  Elapsed time =    0.000 seconds
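The IEEE CRC32 helper follows the usual reflected-CRC convention: seed with ~0, invert at the end. That is how the unit test arrives at the standard check value for "123456789". A sketch:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/crc32.h"

    int
    main(void)
    {
        const char buf[] = "123456789";
        /* Caller handles init and final inversion; the helper only
         * updates the running CRC. */
        uint32_t crc = spdk_crc32_ieee_update(buf, strlen(buf), ~0U) ^ ~0U;

        printf("crc32-ieee = 0x%08x\n", crc);  /* 0xcbf43926 for this input */
        return 0;
    }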
00:07:29.595   13:44:12 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: crc32c
00:07:29.595    Test: test_crc32c ...passed
00:07:29.595    Test: test_crc32c_nvme ...passed
00:07:29.595  
00:07:29.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.595                suites      1      1    n/a      0        0
00:07:29.595                 tests      2      2      2      0        0
00:07:29.595               asserts     16     16     16      0      n/a
00:07:29.595  
00:07:29.595  Elapsed time =    0.000 seconds
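CRC-32C (Castagnoli) uses the same running-CRC convention and is the variant NVMe uses for data digests. A sketch:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/crc32.h"

    int
    main(void)
    {
        const char buf[] = "123456789";
        /* Same init/finalize convention as the IEEE variant above. */
        uint32_t crc = spdk_crc32c_update(buf, strlen(buf), ~0U) ^ ~0U;

        printf("crc32c = 0x%08x\n", crc);  /* standard check value 0xe3069283 */
        return 0;
    }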
00:07:29.595   13:44:12 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: crc64
00:07:29.595    Test: test_crc64_nvme ...passed
00:07:29.595  
00:07:29.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.595                suites      1      1    n/a      0        0
00:07:29.595                 tests      1      1      1      0        0
00:07:29.595               asserts      4      4      4      0      n/a
00:07:29.595  
00:07:29.595  Elapsed time =    0.000 seconds
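test_crc64_nvme covers the 64-bit guard used by the larger NVMe protection-information formats. A sketch, with the assumption that spdk/crc64.h exposes spdk_crc64_nvme() with the same running-CRC convention as the 32-bit helpers:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/crc64.h"

    int
    main(void)
    {
        const char buf[] = "123456789";
        /* Assumption: seed in, caller finalizes, as with the crc32
         * helpers; check your tree's header if this differs. */
        uint64_t crc = spdk_crc64_nvme(buf, strlen(buf), ~0ULL) ^ ~0ULL;

        printf("crc64-nvme = 0x%016llx\n", (unsigned long long)crc);
        return 0;
    }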
00:07:29.595   13:44:12 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: string
00:07:29.595    Test: test_parse_ip_addr ...passed
00:07:29.595    Test: test_str_chomp ...passed
00:07:29.595    Test: test_parse_capacity ...passed
00:07:29.595    Test: test_sprintf_append_realloc ...passed
00:07:29.595    Test: test_strtol ...passed
00:07:29.595    Test: test_strtoll ...passed
00:07:29.595    Test: test_strarray ...passed
00:07:29.595    Test: test_strcpy_replace ...passed
00:07:29.595  
00:07:29.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:29.595                suites      1      1    n/a      0        0
00:07:29.595                 tests      8      8      8      0        0
00:07:29.595               asserts    161    161    161      0      n/a
00:07:29.595  
00:07:29.595  Elapsed time =    0.001 seconds
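string_ut covers SPDK's stricter libc wrappers. A sketch of two of them: spdk_strtol() folds the errno handling into its return value, and spdk_parse_capacity() (signature assumed from include/spdk/string.h) accepts suffixed sizes like the test_parse_capacity cases:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/string.h"

    int
    main(void)
    {
        /* Returns the parsed value, or -errno on bad input, so one
         * sign check replaces the usual strtol/errno dance. */
        long val = spdk_strtol("512", 10);

        if (val < 0) {
            fprintf(stderr, "parse failed: %ld\n", val);
            return 1;
        }

        /* Assumed helper: parses "128M" into bytes and reports whether
         * a binary-prefix suffix was present. */
        uint64_t cap;
        bool has_prefix;

        if (spdk_parse_capacity("128M", &cap, &has_prefix) == 0) {
            printf("%ld, capacity %llu bytes\n", val,
                   (unsigned long long)cap);
        }
        return 0;
    }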
00:07:29.595   13:44:12 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:07:29.595  
00:07:29.595  
00:07:29.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:29.595       http://cunit.sourceforge.net/
00:07:29.595  
00:07:29.595  
00:07:29.595  Suite: dif
00:07:29.595    Test: dif_generate_and_verify_test ...[2024-12-11 13:44:12.310268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:29.595  [2024-12-11 13:44:12.310912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:29.595  [2024-12-11 13:44:12.311347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:29.596  [2024-12-11 13:44:12.311774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:29.596  [2024-12-11 13:44:12.312162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:29.596  [2024-12-11 13:44:12.312439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:29.596  passed
00:07:29.596    Test: dif_disable_check_test ...[2024-12-11 13:44:12.313447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:29.596  [2024-12-11 13:44:12.313890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:29.596  [2024-12-11 13:44:12.314168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:29.596  passed
00:07:29.596    Test: dif_generate_and_verify_different_pi_formats_test ...[2024-12-11 13:44:12.315204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a80000, Actual=b9848de
00:07:29.596  [2024-12-11 13:44:12.315525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b98, Actual=b0a8
00:07:29.596  [2024-12-11 13:44:12.315845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:07:29.596  [2024-12-11 13:44:12.316131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:07:29.596  [2024-12-11 13:44:12.316413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:29.596  [2024-12-11 13:44:12.316740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:29.596  [2024-12-11 13:44:12.317055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:29.596  [2024-12-11 13:44:12.317371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:29.596  [2024-12-11 13:44:12.317683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:29.596  [2024-12-11 13:44:12.318017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:29.596  [2024-12-11 13:44:12.318280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:29.596  passed
00:07:29.596    Test: dif_apptag_mask_test ...[2024-12-11 13:44:12.318634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:29.596  [2024-12-11 13:44:12.318966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:29.596  passed
00:07:29.596    Test: dif_sec_8_md_8_error_test ...passed
00:07:29.596    Test: dif_sec_512_md_0_error_test ...passed
00:07:29.596    Test: dif_sec_512_md_16_error_test ...passed
00:07:29.596    Test: dif_sec_4096_md_0_8_error_test ...passed
00:07:29.596    Test: dif_sec_4100_md_128_error_test ...passed
00:07:29.596    Test: dif_guard_seed_test ...passed
00:07:29.596    Test: dif_guard_value_test ...[2024-12-11 13:44:12.319157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 615:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:07:29.596  [2024-12-11 13:44:12.319212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.596  [2024-12-11 13:44:12.319265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:29.596  [2024-12-11 13:44:12.319298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:29.596  [2024-12-11 13:44:12.319329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.596  [2024-12-11 13:44:12.319353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.596  [2024-12-11 13:44:12.319388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.596  [2024-12-11 13:44:12.319427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.596  [2024-12-11 13:44:12.319490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:29.596  [2024-12-11 13:44:12.319530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:29.596  passed
00:07:29.596    Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:29.596    Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:29.596    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-11 13:44:12.363023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=ff4c, Actual=fd4c
00:07:29.596  [2024-12-11 13:44:12.365630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=fc21, Actual=fe21
00:07:29.596  [2024-12-11 13:44:12.368211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.596  [2024-12-11 13:44:12.370715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.373237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.857  [2024-12-11 13:44:12.375722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.857  [2024-12-11 13:44:12.378240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=fd4c, Actual=7009
00:07:29.857  [2024-12-11 13:44:12.380191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=fe21, Actual=ec28
00:07:29.857  [2024-12-11 13:44:12.382316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=18b753ed, Actual=1ab753ed
00:07:29.857  [2024-12-11 13:44:12.384779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=3a574660, Actual=38574660
00:07:29.857  [2024-12-11 13:44:12.386980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.389502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.391931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.857  [2024-12-11 13:44:12.394338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.857  [2024-12-11 13:44:12.396900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.857  [2024-12-11 13:44:12.399021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=38574660, Actual=ba1a201f
00:07:29.857  [2024-12-11 13:44:12.401100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.857  [2024-12-11 13:44:12.403690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=88010a2d4a37a266, Actual=88010a2d4837a266
00:07:29.857  [2024-12-11 13:44:12.406257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.408855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.411314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=2000000005e
00:07:29.857  [2024-12-11 13:44:12.413788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=2000000005e
00:07:29.857  [2024-12-11 13:44:12.416086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.857  [2024-12-11 13:44:12.417866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=88010a2d4837a266, Actual=c1ec7a2d39b1571d
00:07:29.857  passed
00:07:29.857    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-12-11 13:44:12.418893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:29.857  [2024-12-11 13:44:12.419172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fc21, Actual=fe21
00:07:29.857  [2024-12-11 13:44:12.419447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.419712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.420135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.420419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.420746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:29.857  [2024-12-11 13:44:12.420954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=ec28
00:07:29.857  [2024-12-11 13:44:12.421176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:29.857  [2024-12-11 13:44:12.421484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3a574660, Actual=38574660
00:07:29.857  [2024-12-11 13:44:12.421799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.422080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.422355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.422622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.422910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.857  [2024-12-11 13:44:12.423245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=ba1a201f
00:07:29.857  [2024-12-11 13:44:12.423455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.857  [2024-12-11 13:44:12.423749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4a37a266, Actual=88010a2d4837a266
00:07:29.857  [2024-12-11 13:44:12.424023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.424278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.424585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.857  [2024-12-11 13:44:12.424905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.857  [2024-12-11 13:44:12.425189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.857  passed
00:07:29.857    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-12-11 13:44:12.425396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=c1ec7a2d39b1571d
00:07:29.857  [2024-12-11 13:44:12.425682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:29.857  [2024-12-11 13:44:12.425996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fc21, Actual=fe21
00:07:29.857  [2024-12-11 13:44:12.426283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.426590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.426885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.427157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.427412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:29.857  [2024-12-11 13:44:12.427602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=ec28
00:07:29.857  [2024-12-11 13:44:12.427838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:29.857  [2024-12-11 13:44:12.428091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3a574660, Actual=38574660
00:07:29.857  [2024-12-11 13:44:12.428369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.428629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.428916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.429195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.857  [2024-12-11 13:44:12.429617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.857  [2024-12-11 13:44:12.429827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=ba1a201f
00:07:29.857  [2024-12-11 13:44:12.430016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.857  [2024-12-11 13:44:12.430289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4a37a266, Actual=88010a2d4837a266
00:07:29.857  [2024-12-11 13:44:12.430551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.857  [2024-12-11 13:44:12.430818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.431074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.431333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.431585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.858  [2024-12-11 13:44:12.431829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=c1ec7a2d39b1571d
00:07:29.858  passed
00:07:29.858    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-12-11 13:44:12.432077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:29.858  [2024-12-11 13:44:12.432342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fc21, Actual=fe21
00:07:29.858  [2024-12-11 13:44:12.432779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.433074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.433421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.433734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.434026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:29.858  [2024-12-11 13:44:12.434229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=ec28
00:07:29.858  [2024-12-11 13:44:12.434441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:29.858  [2024-12-11 13:44:12.434738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3a574660, Actual=38574660
00:07:29.858  [2024-12-11 13:44:12.435019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.435282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.435552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.435960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.436247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.858  [2024-12-11 13:44:12.436456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=ba1a201f
00:07:29.858  [2024-12-11 13:44:12.436693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.858  [2024-12-11 13:44:12.436994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4a37a266, Actual=88010a2d4837a266
00:07:29.858  [2024-12-11 13:44:12.437292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.437558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.437800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.438027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.438252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.858  passed
00:07:29.858    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-12-11 13:44:12.438453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=c1ec7a2d39b1571d
00:07:29.858  [2024-12-11 13:44:12.438670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:29.858  [2024-12-11 13:44:12.438925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fc21, Actual=fe21
00:07:29.858  [2024-12-11 13:44:12.439175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.439409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.439675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.439916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.440149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:29.858  passed
00:07:29.858    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-12-11 13:44:12.440321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=ec28
00:07:29.858  [2024-12-11 13:44:12.440520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:29.858  [2024-12-11 13:44:12.440754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3a574660, Actual=38574660
00:07:29.858  [2024-12-11 13:44:12.440982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.441211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.441454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.441697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.441949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.858  [2024-12-11 13:44:12.442166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=ba1a201f
00:07:29.858  [2024-12-11 13:44:12.442353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.858  [2024-12-11 13:44:12.442584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4a37a266, Actual=88010a2d4837a266
00:07:29.858  [2024-12-11 13:44:12.442860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.443081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.443310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.443530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.443766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.858  [2024-12-11 13:44:12.443932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=c1ec7a2d39b1571d
00:07:29.858  passed
00:07:29.858    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-12-11 13:44:12.444124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:29.858  [2024-12-11 13:44:12.444338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fc21, Actual=fe21
00:07:29.858  [2024-12-11 13:44:12.444569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.444821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.445159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.445393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.445661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:29.858  passed
00:07:29.858    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-12-11 13:44:12.445825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=ec28
00:07:29.858  [2024-12-11 13:44:12.446024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:29.858  [2024-12-11 13:44:12.446258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3a574660, Actual=38574660
00:07:29.858  [2024-12-11 13:44:12.446484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.446716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.446940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.447163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.858  [2024-12-11 13:44:12.447377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.858  [2024-12-11 13:44:12.447554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=ba1a201f
00:07:29.858  [2024-12-11 13:44:12.447774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.858  [2024-12-11 13:44:12.448000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4a37a266, Actual=88010a2d4837a266
00:07:29.858  [2024-12-11 13:44:12.448344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.448569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.858  [2024-12-11 13:44:12.448807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.449037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.858  [2024-12-11 13:44:12.449267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.858  passed
00:07:29.858    Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-12-11 13:44:12.449446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=c1ec7a2d39b1571d
00:07:29.859  passed
00:07:29.859    Test: dif_copy_sec_512_md_8_dif_disable_single_iov ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:29.859    Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed
00:07:29.859    Test: nvme_pract_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:29.859    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:29.859    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:29.859    Test: dif_copy_sec_512_md_8_prchk_7_multi_bounce_iovs_complex_splits ...passed
00:07:29.859    Test: dif_copy_sec_512_md_8_dif_disable_multi_bounce_iovs_complex_splits ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:29.859    Test: nvme_pract_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-11 13:44:12.553052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=ff4c, Actual=fd4c
00:07:29.859  [2024-12-11 13:44:12.554231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=f5e, Actual=d5e
00:07:29.859  [2024-12-11 13:44:12.555432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.556782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.558012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.859  [2024-12-11 13:44:12.559197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.859  [2024-12-11 13:44:12.560480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=fd4c, Actual=7009
00:07:29.859  [2024-12-11 13:44:12.561669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=ba8c, Actual=a885
00:07:29.859  [2024-12-11 13:44:12.562891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=18b753ed, Actual=1ab753ed
00:07:29.859  [2024-12-11 13:44:12.564119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=9ea26cbb, Actual=9ca26cbb
00:07:29.859  [2024-12-11 13:44:12.565326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.566473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.567753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.859  [2024-12-11 13:44:12.568886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:29.859  [2024-12-11 13:44:12.570262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.859  [2024-12-11 13:44:12.571426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=c91054db, Actual=4b5d32a4
00:07:29.859  [2024-12-11 13:44:12.572587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.859  [2024-12-11 13:44:12.574067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=478d8185b93adcd6, Actual=478d8185bb3adcd6
00:07:29.859  [2024-12-11 13:44:12.575221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.576398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.577741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=2000000005e
00:07:29.859  [2024-12-11 13:44:12.578957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=2000000005e
00:07:29.859  [2024-12-11 13:44:12.580050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.859  passed
00:07:29.859    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...
00:07:29.859  [2024-12-11 13:44:12.581294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=2d19b1684f09bf67, Actual=64f4c1683e8f4a1c
00:07:29.859  [2024-12-11 13:44:12.581932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:29.859  [2024-12-11 13:44:12.582227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=98c4, Actual=9ac4
00:07:29.859  [2024-12-11 13:44:12.582815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.583119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.583718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.859  [2024-12-11 13:44:12.583966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.859  [2024-12-11 13:44:12.584477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:29.859  [2024-12-11 13:44:12.584753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=3f1f
00:07:29.859  [2024-12-11 13:44:12.585265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:29.859  [2024-12-11 13:44:12.585549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2078a4ef, Actual=2278a4ef
00:07:29.859  [2024-12-11 13:44:12.586080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.586344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.586847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.859  [2024-12-11 13:44:12.587096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:29.859  [2024-12-11 13:44:12.587633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:29.859  [2024-12-11 13:44:12.588060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=77ca9c8f, Actual=f587faf0
00:07:29.859  [2024-12-11 13:44:12.588493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:29.859  [2024-12-11 13:44:12.588978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=52f21af6a0e95313, Actual=52f21af6a2e95313
00:07:29.859  [2024-12-11 13:44:12.589429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.589901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:29.859  [2024-12-11 13:44:12.590446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.859  [2024-12-11 13:44:12.590751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:29.859  [2024-12-11 13:44:12.591167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:29.859  passed
00:07:29.859    Test: dix_sec_0_md_8_error ...passed
00:07:29.859    Test: dix_sec_512_md_0_error ...passed
00:07:29.859    Test: dix_sec_512_md_16_error ...passed
00:07:29.859    Test: dix_sec_4096_md_0_8_error ...passed
00:07:29.859    Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-12-11 13:44:12.591667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=718b5a1b275cc5d9
00:07:29.859  [2024-12-11 13:44:12.591778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 615:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:07:29.859  [2024-12-11 13:44:12.591821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.859  [2024-12-11 13:44:12.591860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:29.859  [2024-12-11 13:44:12.591894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:29.859  [2024-12-11 13:44:12.591923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.859  [2024-12-11 13:44:12.591942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.859  [2024-12-11 13:44:12.591980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.859  [2024-12-11 13:44:12.592009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:29.859  passed
00:07:29.859    Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:29.859    Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:29.859    Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:29.859    Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:29.859    Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:29.859    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:29.859    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:29.859    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:29.859    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-11 13:44:12.632725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=ff4c, Actual=fd4c
00:07:30.118  [2024-12-11 13:44:12.634171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=f5e, Actual=d5e
00:07:30.118  [2024-12-11 13:44:12.635740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:30.118  [2024-12-11 13:44:12.637096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:30.118  [2024-12-11 13:44:12.638628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:30.118  [2024-12-11 13:44:12.639832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:30.118  [2024-12-11 13:44:12.641042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=fd4c, Actual=7009
00:07:30.118  [2024-12-11 13:44:12.642169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=ba8c, Actual=a885
00:07:30.118  [2024-12-11 13:44:12.643191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=18b753ed, Actual=1ab753ed
00:07:30.119  [2024-12-11 13:44:12.644200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=9ea26cbb, Actual=9ca26cbb
00:07:30.119  [2024-12-11 13:44:12.645322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.646335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.647332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:30.119  [2024-12-11 13:44:12.648373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e
00:07:30.119  [2024-12-11 13:44:12.649382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=1ab753ed, Actual=2b4171a5
00:07:30.119  [2024-12-11 13:44:12.650425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=c91054db, Actual=4b5d32a4
00:07:30.119  [2024-12-11 13:44:12.651475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:30.119  [2024-12-11 13:44:12.652556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=478d8185b93adcd6, Actual=478d8185bb3adcd6
00:07:30.119  [2024-12-11 13:44:12.653617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.654688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.655695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=2000000005e
00:07:30.119  [2024-12-11 13:44:12.656780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=2000000005e
00:07:30.119  [2024-12-11 13:44:12.657924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:30.119  passed
00:07:30.119    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-11 13:44:12.658981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94,  Expected=2d19b1684f09bf67, Actual=64f4c1683e8f4a1c
00:07:30.119  [2024-12-11 13:44:12.659392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=ff4c, Actual=fd4c
00:07:30.119  [2024-12-11 13:44:12.659654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=98c4, Actual=9ac4
00:07:30.119  [2024-12-11 13:44:12.660395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.660667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.660982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:30.119  [2024-12-11 13:44:12.661240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:30.119  [2024-12-11 13:44:12.661619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7009
00:07:30.119  [2024-12-11 13:44:12.661952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=3f1f
00:07:30.119  [2024-12-11 13:44:12.662455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=18b753ed, Actual=1ab753ed
00:07:30.119  [2024-12-11 13:44:12.662731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=bd1478cc, Actual=bf1478cc
00:07:30.119  [2024-12-11 13:44:12.663116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.663710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.663954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:30.119  [2024-12-11 13:44:12.664280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058
00:07:30.119  [2024-12-11 13:44:12.664530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=2b4171a5
00:07:30.119  [2024-12-11 13:44:12.664903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=68eb26d3
00:07:30.119  [2024-12-11 13:44:12.665357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3
00:07:30.119  [2024-12-11 13:44:12.665608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=52f21af6a0e95313, Actual=52f21af6a2e95313
00:07:30.119  [2024-12-11 13:44:12.666072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.666392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=288
00:07:30.119  [2024-12-11 13:44:12.666822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:30.119  [2024-12-11 13:44:12.667158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058
00:07:30.119  [2024-12-11 13:44:12.667540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=4b456d1ed958df20
00:07:30.119  [2024-12-11 13:44:12.667797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=718b5a1b275cc5d9
00:07:30.119  passed
00:07:30.119    Test: set_md_interleave_iovs_test ...passed
00:07:30.119    Test: set_md_interleave_iovs_split_test ...passed
00:07:30.119    Test: dif_generate_stream_pi_16_test ...passed
00:07:30.119    Test: dif_generate_stream_test ...passed
00:07:30.119    Test: set_md_interleave_iovs_alignment_test ...passed
00:07:30.119    Test: dif_generate_split_test ...
00:07:30.119  [2024-12-11 13:44:12.678329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:2193:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur.
00:07:30.119  passed
00:07:30.119    Test: set_md_interleave_iovs_multi_segments_test ...passed
00:07:30.119    Test: dif_verify_split_test ...passed
00:07:30.119    Test: dif_verify_stream_multi_segments_test ...passed
00:07:30.119    Test: update_crc32c_pi_16_test ...passed
00:07:30.119    Test: update_crc32c_test ...passed
00:07:30.119    Test: dif_update_crc32c_split_test ...passed
00:07:30.119    Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:07:30.119    Test: get_range_with_md_test ...passed
00:07:30.119    Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:07:30.119    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:07:30.119    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:30.119    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:07:30.119    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:07:30.119    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:30.119    Test: dif_generate_and_verify_unmap_test ...passed
00:07:30.119    Test: dif_pi_format_check_test ...passed
00:07:30.119    Test: dif_type_check_test ...passed
00:07:30.119  
00:07:30.119  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.119                suites      1      1    n/a      0        0
00:07:30.119                 tests     92     92     92      0        0
00:07:30.119               asserts   3872   3872   3872      0      n/a
00:07:30.119  
00:07:30.119  Elapsed time =    0.408 seconds
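The flood of Guard/App Tag/Ref Tag *ERROR* lines above is the suite working as intended: the dif_sec_*/dix_sec_* cases deliberately flip bits in the data or protection information and assert that _dif_verify flags the mismatch. As a rough illustration only (SPDK's real verification lives in lib/util/dif.c and also handles 32- and 64-bit guards), a 16-bit guard check amounts to recomputing a CRC over the block and comparing it with the stored value:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: a bitwise CRC-16 with the T10-DIF
 * polynomial (0x8BB7), not SPDK's actual implementation. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)(buf[i] << 8);
		for (int b = 0; b < 8; b++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

/* Recompute the guard over the block and compare it with the guard
 * stored in the protection information; a mismatch means corruption. */
static int dif_guard_check(const uint8_t *block, size_t len,
			   uint16_t stored_guard, uint64_t lba)
{
	uint16_t actual = crc16_t10dif(block, len);

	if (actual != stored_guard) {
		fprintf(stderr, "Guard mismatch: LBA=%llu, Expected=%x, Actual=%x\n",
			(unsigned long long)lba, stored_guard, actual);
		return -1;
	}
	return 0;
}

int main(void)
{
	uint8_t block[512];

	memset(block, 0xab, sizeof(block));
	uint16_t guard = crc16_t10dif(block, sizeof(block));

	block[7] ^= 0x02;  /* inject a single-bit error, as the tests do */
	return dif_guard_check(block, sizeof(block), guard, 94) ? 1 : 0;
}
```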
00:07:30.119   13:44:12 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:07:30.119  
00:07:30.119  
00:07:30.119       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.119       http://cunit.sourceforge.net/
00:07:30.119  
00:07:30.119  
00:07:30.119  Suite: iov
00:07:30.119    Test: test_single_iov ...passed
00:07:30.119    Test: test_simple_iov ...passed
00:07:30.119    Test: test_complex_iov ...passed
00:07:30.119    Test: test_iovs_to_buf ...passed
00:07:30.119    Test: test_buf_to_iovs ...passed
00:07:30.119    Test: test_memset ...passed
00:07:30.119    Test: test_iov_one ...passed
00:07:30.119    Test: test_iov_xfer ...passed
00:07:30.119  
00:07:30.119  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.119                suites      1      1    n/a      0        0
00:07:30.119                 tests      8      8      8      0        0
00:07:30.119               asserts    156    156    156      0      n/a
00:07:30.119  
00:07:30.119  Elapsed time =    0.000 seconds
00:07:30.119   13:44:12 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:07:30.119  
00:07:30.119  
00:07:30.119       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.119       http://cunit.sourceforge.net/
00:07:30.119  
00:07:30.119  
00:07:30.119  Suite: math
00:07:30.119    Test: test_serial_number_arithmetic ...passed
00:07:30.119  Suite: erase
00:07:30.119    Test: test_memset_s ...passed
00:07:30.119  
00:07:30.119  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.119                suites      2      2    n/a      0        0
00:07:30.119                 tests      2      2      2      0        0
00:07:30.119               asserts     18     18     18      0      n/a
00:07:30.119  
00:07:30.119  Elapsed time =    0.000 seconds
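test_serial_number_arithmetic above exercises wrap-safe sequence-number comparison. A minimal sketch in the RFC 1982 style, with a hypothetical helper name rather than SPDK's exact API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: RFC 1982 serial-number "less than" for 32-bit
 * sequence numbers, which stays correct across wrap-around. */
static bool sn32_lt(uint32_t s1, uint32_t s2)
{
	return s1 != s2 &&
	       ((s1 < s2 && s2 - s1 < (1u << 31)) ||
	        (s1 > s2 && s1 - s2 > (1u << 31)));
}

int main(void)
{
	assert(sn32_lt(1, 2));            /* ordinary ordering */
	assert(sn32_lt(UINT32_MAX, 0));   /* wrap: MAX precedes 0 */
	assert(!sn32_lt(2, 1));
	return 0;
}
```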
00:07:30.119   13:44:12 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:07:30.119  
00:07:30.119  
00:07:30.119       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.119       http://cunit.sourceforge.net/
00:07:30.119  
00:07:30.119  
00:07:30.119  Suite: pipe
00:07:30.119    Test: test_create_destroy ...passed
00:07:30.119    Test: test_write_get_buffer ...passed
00:07:30.119    Test: test_write_advance ...passed
00:07:30.119    Test: test_read_get_buffer ...passed
00:07:30.119    Test: test_read_advance ...passed
00:07:30.119    Test: test_data ...passed
00:07:30.119  
00:07:30.119  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.119                suites      1      1    n/a      0        0
00:07:30.119                 tests      6      6      6      0        0
00:07:30.119               asserts    251    251    251      0      n/a
00:07:30.119  
00:07:30.119  Elapsed time =    0.000 seconds
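The pipe tests follow a get-buffer/advance pattern: the caller asks for a contiguous writable (or readable) span, uses it, then advances the corresponding pointer. A hypothetical byte-ring sketch of that pattern (spdk_pipe's actual implementation differs):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical byte ring illustrating the get_buffer/advance split
 * that the pipe suite exercises. */
struct ring {
	uint8_t buf[16];
	size_t head;    /* next write offset */
	size_t tail;    /* next read offset  */
	size_t count;   /* bytes currently stored */
};

/* Expose the largest contiguous writable span starting at head. */
static size_t write_get_buffer(struct ring *r, uint8_t **out)
{
	size_t free_total = sizeof(r->buf) - r->count;
	size_t to_end = sizeof(r->buf) - r->head;

	*out = &r->buf[r->head];
	return free_total < to_end ? free_total : to_end;
}

static void write_advance(struct ring *r, size_t n)
{
	r->head = (r->head + n) % sizeof(r->buf);
	r->count += n;
}

/* Symmetric read side: expose a readable span, then advance past it. */
static size_t read_get_buffer(struct ring *r, const uint8_t **out)
{
	size_t to_end = sizeof(r->buf) - r->tail;

	*out = &r->buf[r->tail];
	return r->count < to_end ? r->count : to_end;
}

static void read_advance(struct ring *r, size_t n)
{
	r->tail = (r->tail + n) % sizeof(r->buf);
	r->count -= n;
}

int main(void)
{
	struct ring r = {0};
	uint8_t *w;
	size_t avail = write_get_buffer(&r, &w);
	size_t len = avail < 5 ? avail : 5;

	memcpy(w, "hello", len);
	write_advance(&r, len);

	const uint8_t *rd;
	size_t got = read_get_buffer(&r, &rd);

	printf("read %zu bytes: %.*s\n", got, (int)got, rd);
	read_advance(&r, got);
	return 0;
}
```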
00:07:30.119    13:44:12 unittest.unittest_util -- unit/unittest.sh@146 -- # uname -s
00:07:30.119   13:44:12 unittest.unittest_util -- unit/unittest.sh@146 -- # '[' Linux = Linux ']'
00:07:30.119   13:44:12 unittest.unittest_util -- unit/unittest.sh@147 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/fd_group.c/fd_group_ut
00:07:30.119  
00:07:30.119  
00:07:30.119       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.119       http://cunit.sourceforge.net/
00:07:30.120  
00:07:30.120  
00:07:30.120  Suite: fd_group
00:07:30.120    Test: test_fd_group_basic ...passed
00:07:30.120    Test: test_fd_group_nest_unnest ...passed
00:07:30.120    Test: test_fd_group_multi_nest ...passed
00:07:30.120  
00:07:30.120  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.120                suites      1      1    n/a      0        0
00:07:30.120                 tests      3      3      3      0        0
00:07:30.120               asserts    124    124    124      0      n/a
00:07:30.120  
00:07:30.120  Elapsed time =    0.001 seconds
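test_fd_group_nest_unnest and test_fd_group_multi_nest suggest nested event groups. One plausible way to picture nesting, assuming an epoll-based design like SPDK's fd_group on Linux: register a child epoll fd on a parent epoll fd, so waiting on the parent also surfaces the child's events. Purely a demo, not the spdk_fd_group API:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
	int parent = epoll_create1(0);
	int child = epoll_create1(0);
	int ev = eventfd(0, 0);

	/* inner event source lives in the child group */
	struct epoll_event e = { .events = EPOLLIN, .data.fd = ev };
	epoll_ctl(child, EPOLL_CTL_ADD, ev, &e);

	/* "nest": the child group becomes one entry in the parent group */
	struct epoll_event n = { .events = EPOLLIN, .data.fd = child };
	epoll_ctl(parent, EPOLL_CTL_ADD, child, &n);

	uint64_t one = 1;
	(void)write(ev, &one, sizeof(one));  /* fire the inner event */

	struct epoll_event out;
	int rc = epoll_wait(parent, &out, 1, 1000);

	printf("parent saw %d event(s)\n", rc);  /* expect 1 */
	close(ev);
	close(child);
	close(parent);
	return rc == 1 ? 0 : 1;
}
```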
00:07:30.120  
00:07:30.120  real	0m0.848s
00:07:30.120  user	0m0.551s
00:07:30.120  sys	0m0.277s
00:07:30.120   13:44:12 unittest.unittest_util -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.120   13:44:12 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x
00:07:30.120  ************************************
00:07:30.120  END TEST unittest_util
00:07:30.120  ************************************
00:07:30.379   13:44:12 unittest -- unit/unittest.sh@263 -- # [[ y == y ]]
00:07:30.379   13:44:12 unittest -- unit/unittest.sh@264 -- # run_test unittest_fsdev unittest_fsdev
00:07:30.379   13:44:12 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.379   13:44:12 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.379   13:44:12 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:30.379  ************************************
00:07:30.379  START TEST unittest_fsdev
00:07:30.379  ************************************
00:07:30.379   13:44:12 unittest.unittest_fsdev -- common/autotest_common.sh@1129 -- # unittest_fsdev
00:07:30.379   13:44:12 unittest.unittest_fsdev -- unit/unittest.sh@152 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/fsdev/fsdev.c/fsdev_ut
00:07:30.379  
00:07:30.379  
00:07:30.379       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.379       http://cunit.sourceforge.net/
00:07:30.379  
00:07:30.379  
00:07:30.379  Suite: fsdev
00:07:30.379    Test: ut_fsdev_test_open_close ...passed
00:07:30.379    Test: ut_fsdev_test_set_opts ...passed
00:07:30.379    Test: ut_fsdev_test_get_io_channel ...
00:07:30.379  [2024-12-11 13:44:12.967459] fsdev.c: 631:spdk_fsdev_set_opts: *ERROR*: opts cannot be NULL
00:07:30.379  [2024-12-11 13:44:12.967679] fsdev.c: 636:spdk_fsdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:07:30.379  passed
00:07:30.379    Test: ut_fsdev_test_mount_ok ...passed
00:07:30.379    Test: ut_fsdev_test_mount_err ...passed
00:07:30.379    Test: ut_fsdev_test_umount ...passed
00:07:30.379    Test: ut_fsdev_test_lookup_ok ...passed
00:07:30.379    Test: ut_fsdev_test_lookup_err ...passed
00:07:30.379    Test: ut_fsdev_test_forget ...passed
00:07:30.379    Test: ut_fsdev_test_getattr ...passed
00:07:30.379    Test: ut_fsdev_test_setattr ...passed
00:07:30.379    Test: ut_fsdev_test_readlink ...passed
00:07:30.379    Test: ut_fsdev_test_symlink ...passed
00:07:30.379    Test: ut_fsdev_test_mknod ...passed
00:07:30.379    Test: ut_fsdev_test_mkdir ...passed
00:07:30.379    Test: ut_fsdev_test_unlink ...passed
00:07:30.379    Test: ut_fsdev_test_rmdir ...passed
00:07:30.379    Test: ut_fsdev_test_rename ...passed
00:07:30.379    Test: ut_fsdev_test_link ...passed
00:07:30.379    Test: ut_fsdev_test_fopen ...passed
00:07:30.379    Test: ut_fsdev_test_read ...passed
00:07:30.379    Test: ut_fsdev_test_write ...passed
00:07:30.379    Test: ut_fsdev_test_statfs ...passed
00:07:30.379    Test: ut_fsdev_test_release ...passed
00:07:30.379    Test: ut_fsdev_test_fsync ...passed
00:07:30.379    Test: ut_fsdev_test_getxattr ...passed
00:07:30.379    Test: ut_fsdev_test_setxattr ...passed
00:07:30.379    Test: ut_fsdev_test_listxattr ...passed
00:07:30.379    Test: ut_fsdev_test_listxattr_get_size ...passed
00:07:30.379    Test: ut_fsdev_test_removexattr ...passed
00:07:30.379    Test: ut_fsdev_test_flush ...passed
00:07:30.379    Test: ut_fsdev_test_opendir ...passed
00:07:30.379    Test: ut_fsdev_test_readdir ...passed
00:07:30.379    Test: ut_fsdev_test_releasedir ...passed
00:07:30.379    Test: ut_fsdev_test_fsyncdir ...passed
00:07:30.379    Test: ut_fsdev_test_flock ...passed
00:07:30.379    Test: ut_fsdev_test_create ...passed
00:07:30.379    Test: ut_fsdev_test_abort ...passed
00:07:30.379    Test: ut_fsdev_test_fallocate ...passed
00:07:30.379    Test: ut_fsdev_test_copy_file_range ...passed
00:07:30.379  [2024-12-11 13:44:13.015722] fsdev.c: 354:fsdev_mgr_unregister_cb: *ERROR*: fsdev IO pool count is 65535 but should be 131070
00:07:30.379  
00:07:30.379  
00:07:30.379  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.380                suites      1      1    n/a      0        0
00:07:30.380                 tests     40     40     40      0        0
00:07:30.380               asserts   2840   2840   2840      0      n/a
00:07:30.380  
00:07:30.380  Elapsed time =    0.047 seconds
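The two *ERROR* lines logged during ut_fsdev_test_set_opts are deliberate negative cases: a NULL options pointer and a zero opts_size are both rejected. A sketch of that defensive-validation pattern, with a hypothetical struct standing in for spdk_fsdev_opts:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical options struct; the real one is spdk_fsdev_opts. */
struct opts {
	size_t opts_size;
	int flag;
};

static int set_opts(const struct opts *o)
{
	if (o == NULL) {
		fprintf(stderr, "opts cannot be NULL\n");
		return -1;
	}
	if (o->opts_size == 0) {
		fprintf(stderr, "opts_size inside opts cannot be zero value\n");
		return -1;
	}
	return 0;
}

int main(void)
{
	struct opts bad = { 0, 0 };

	set_opts(NULL);   /* first rejected path */
	set_opts(&bad);   /* second rejected path */

	struct opts good = { sizeof(good), 1 };
	return set_opts(&good);
}
```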
00:07:30.380  
00:07:30.380  real	0m0.104s
00:07:30.380  user	0m0.055s
00:07:30.380  sys	0m0.047s
00:07:30.380   13:44:13 unittest.unittest_fsdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.380  ************************************
00:07:30.380  END TEST unittest_fsdev
00:07:30.380  ************************************
00:07:30.380   13:44:13 unittest.unittest_fsdev -- common/autotest_common.sh@10 -- # set +x
00:07:30.380   13:44:13 unittest -- unit/unittest.sh@266 -- # [[ y == y ]]
00:07:30.380   13:44:13 unittest -- unit/unittest.sh@267 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:30.380   13:44:13 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.380   13:44:13 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.380   13:44:13 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:30.380  ************************************
00:07:30.380  START TEST unittest_vhost
00:07:30.380  ************************************
00:07:30.380   13:44:13 unittest.unittest_vhost -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:30.380  
00:07:30.380  
00:07:30.380       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.380       http://cunit.sourceforge.net/
00:07:30.380  
00:07:30.380  
00:07:30.380  Suite: vhost_suite
00:07:30.380    Test: desc_to_iov_test ...passed
00:07:30.380    Test: create_controller_test ...
00:07:30.380  [2024-12-11 13:44:13.123145] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached
00:07:30.380  [2024-12-11 13:44:13.127474] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:30.380  [2024-12-11 13:44:13.127578] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:07:30.380  [2024-12-11 13:44:13.127702] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:30.380  [2024-12-11 13:44:13.127766] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:07:30.380  [2024-12-11 13:44:13.127799] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 125:vhost_dev_register: *ERROR*: Can't register controller with no name
00:07:30.380  [2024-12-11 13:44:13.128274] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller xxxxxxxx[... long run of 'x' characters truncated ...] is too long: some_path/xxxxxxxx[... long run of 'x' characters truncated ...]
00:07:30.380  [2024-12-11 13:44:13.129376] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 141:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists.
00:07:30.380  passed
00:07:30.380    Test: session_find_by_vid_test ...passed
00:07:30.380    Test: remove_controller_test ...
00:07:30.380  [2024-12-11 13:44:13.131612] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1889:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection.
00:07:30.380  passed
00:07:30.380    Test: vq_avail_ring_get_test ...passed
00:07:30.380    Test: vq_packed_ring_test ...passed
00:07:30.380    Test: vhost_blk_construct_test ...passed
00:07:30.380  
00:07:30.380  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.380                suites      1      1    n/a      0        0
00:07:30.380                 tests      7      7      7      0        0
00:07:30.380               asserts    147    147    147      0      n/a
00:07:30.380  
00:07:30.380  Elapsed time =    0.013 seconds
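The create_controller_test errors ("cpumask 0xf0 is invalid (core mask is 0xf)") reduce to simple bit arithmetic: every requested CPU must fall inside the application's core mask, and at least one CPU must be selected. An illustrative check, not SPDK's spdk_cpuset API:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a requested cpumask is valid when it is non-empty
 * and selects no CPU outside the application's core mask. */
static int cpumask_is_valid(uint64_t requested, uint64_t core_mask)
{
	if (requested == 0) {
		return 0;  /* no CPU selected */
	}
	return (requested & ~core_mask) == 0;  /* no CPU outside core mask */
}

int main(void)
{
	printf("0xf0 under 0xf -> %d\n", cpumask_is_valid(0xf0, 0xf));  /* 0: invalid */
	printf("0xff under 0xf -> %d\n", cpumask_is_valid(0xff, 0xf));  /* 0: invalid */
	printf("0x03 under 0xf -> %d\n", cpumask_is_valid(0x03, 0xf));  /* 1: valid */
	return 0;
}
```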
00:07:30.380  ************************************
00:07:30.380  END TEST unittest_vhost
00:07:30.380  ************************************
00:07:30.380  
00:07:30.380  real	0m0.052s
00:07:30.380  user	0m0.034s
00:07:30.380  sys	0m0.018s
00:07:30.380   13:44:13 unittest.unittest_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.380   13:44:13 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x
00:07:30.639   13:44:13 unittest -- unit/unittest.sh@269 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:30.639   13:44:13 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.639   13:44:13 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.639   13:44:13 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:30.639  ************************************
00:07:30.639  START TEST unittest_dma
00:07:30.640  ************************************
00:07:30.640   13:44:13 unittest.unittest_dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:30.640  
00:07:30.640  
00:07:30.640       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.640       http://cunit.sourceforge.net/
00:07:30.640  
00:07:30.640  
00:07:30.640  Suite: dma_suite
00:07:30.640    Test: test_dma ...
00:07:30.640  [2024-12-11 13:44:13.217270] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c:  60:spdk_memory_domain_create: *ERROR*: Context size can't be 0
00:07:30.640  passed
00:07:30.640  
00:07:30.640  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.640                suites      1      1    n/a      0        0
00:07:30.640                 tests      1      1      1      0        0
00:07:30.640               asserts     54     54     54      0      n/a
00:07:30.640  
00:07:30.640  Elapsed time =    0.001 seconds
00:07:30.640  
00:07:30.640  real	0m0.032s
00:07:30.640  user	0m0.015s
00:07:30.640  sys	0m0.018s
00:07:30.640   13:44:13 unittest.unittest_dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.640   13:44:13 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x
00:07:30.640  ************************************
00:07:30.640  END TEST unittest_dma
00:07:30.640  ************************************
00:07:30.640   13:44:13 unittest -- unit/unittest.sh@271 -- # run_test unittest_init unittest_init
00:07:30.640   13:44:13 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.640   13:44:13 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.640   13:44:13 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:30.640  ************************************
00:07:30.640  START TEST unittest_init
00:07:30.640  ************************************
00:07:30.640   13:44:13 unittest.unittest_init -- common/autotest_common.sh@1129 -- # unittest_init
00:07:30.640   13:44:13 unittest.unittest_init -- unit/unittest.sh@156 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut
00:07:30.640  
00:07:30.640  
00:07:30.640       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.640       http://cunit.sourceforge.net/
00:07:30.640  
00:07:30.640  
00:07:30.640  Suite: subsystem_suite
00:07:30.640    Test: subsystem_sort_test_depends_on_single ...passed
00:07:30.640    Test: subsystem_sort_test_depends_on_multiple ...passed
00:07:30.640    Test: subsystem_sort_test_missing_dependency ...
00:07:30.640  [2024-12-11 13:44:13.305717] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing
00:07:30.640  [2024-12-11 13:44:13.305967] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing
00:07:30.640  passed
00:07:30.640  
00:07:30.640  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.640                suites      1      1    n/a      0        0
00:07:30.640                 tests      3      3      3      0        0
00:07:30.640               asserts     20     20     20      0      n/a
00:07:30.640  
00:07:30.640  Elapsed time =    0.000 seconds
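subsystem_sort_test_missing_dependency hits the path where a declared dependency was never registered ("subsystem A dependency B is missing"). The essential check, sketched with hypothetical structures rather than SPDK's init internals:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical registry entry; SPDK subsystems may declare many deps. */
struct subsystem {
	const char *name;
	const char *depends_on;  /* single dep for brevity */
};

/* Every declared dependency must itself be a registered subsystem. */
static int deps_satisfied(const struct subsystem *subs, int n)
{
	for (int i = 0; i < n; i++) {
		if (subs[i].depends_on == NULL) {
			continue;
		}
		int found = 0;
		for (int j = 0; j < n; j++) {
			if (strcmp(subs[i].depends_on, subs[j].name) == 0) {
				found = 1;
				break;
			}
		}
		if (!found) {
			fprintf(stderr, "subsystem %s dependency %s is missing\n",
				subs[i].name, subs[i].depends_on);
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	struct subsystem subs[] = { { "A", "B" } };  /* B never registered */

	return deps_satisfied(subs, 1) ? 1 : 0;
}
```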
00:07:30.640  
00:07:30.640  real	0m0.043s
00:07:30.640  user	0m0.025s
00:07:30.640  sys	0m0.019s
00:07:30.640   13:44:13 unittest.unittest_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.640   13:44:13 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x
00:07:30.640  ************************************
00:07:30.640  END TEST unittest_init
00:07:30.640  ************************************
00:07:30.640   13:44:13 unittest -- unit/unittest.sh@272 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:07:30.640   13:44:13 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.640   13:44:13 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.640   13:44:13 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:30.640  ************************************
00:07:30.640  START TEST unittest_keyring
00:07:30.640  ************************************
00:07:30.640   13:44:13 unittest.unittest_keyring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:07:30.640  
00:07:30.640  
00:07:30.640       CUnit - A unit testing framework for C - Version 2.1-3
00:07:30.640       http://cunit.sourceforge.net/
00:07:30.640  
00:07:30.640  
00:07:30.640  Suite: keyring
00:07:30.640    Test: test_keyring_add_remove ...
00:07:30.640  [2024-12-11 13:44:13.401239] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists
00:07:30.640  [2024-12-11 13:44:13.401467] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists
00:07:30.640  [2024-12-11 13:44:13.401502] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 168:spdk_keyring_remove_key: *ERROR*: Key 'key0' is not owned by module 'ut2'
00:07:30.640  passed
00:07:30.640    Test: test_keyring_get_put ...passed
00:07:30.640  [2024-12-11 13:44:13.401533] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key 'key0' does not exist
00:07:30.640  [2024-12-11 13:44:13.401564] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key ':key0' does not exist
00:07:30.640  [2024-12-11 13:44:13.401591] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:07:30.640  
00:07:30.640  
00:07:30.640  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:30.640                suites      1      1    n/a      0        0
00:07:30.640                 tests      2      2      2      0        0
00:07:30.640               asserts     46     46     46      0      n/a
00:07:30.640  
00:07:30.640  Elapsed time =    0.001 seconds
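The keyring errors above cover three negative paths: adding a key whose name already exists, removing a key owned by a different module, and removing a key that does not exist. A self-contained sketch of those checks (illustrative structures, not lib/keyring's real data model):

```c
#include <stdio.h>
#include <string.h>

struct key {
	const char *name;
	const char *owner;
};

static struct key keys[8];
static int nkeys;

/* Reject duplicate names on add. */
static int keyring_add(const char *name, const char *owner)
{
	for (int i = 0; i < nkeys; i++) {
		if (strcmp(keys[i].name, name) == 0) {
			fprintf(stderr, "Key '%s' already exists\n", name);
			return -1;
		}
	}
	keys[nkeys++] = (struct key){ name, owner };
	return 0;
}

/* Only the owning module may remove a key; missing keys also fail. */
static int keyring_remove(const char *name, const char *module)
{
	for (int i = 0; i < nkeys; i++) {
		if (strcmp(keys[i].name, name) != 0) {
			continue;
		}
		if (strcmp(keys[i].owner, module) != 0) {
			fprintf(stderr, "Key '%s' is not owned by module '%s'\n",
				name, module);
			return -1;
		}
		keys[i] = keys[--nkeys];
		return 0;
	}
	fprintf(stderr, "Key '%s' does not exist\n", name);
	return -1;
}

int main(void)
{
	keyring_add("key0", "ut");
	keyring_add("key0", "ut");      /* already exists */
	keyring_remove("key0", "ut2");  /* wrong owner */
	keyring_remove("key0", "ut");   /* ok */
	keyring_remove("key0", "ut");   /* no longer exists */
	return 0;
}
```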
00:07:30.640  
00:07:30.640  real	0m0.032s
00:07:30.640  user	0m0.014s
00:07:30.640  sys	0m0.019s
00:07:30.640   13:44:13 unittest.unittest_keyring -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.640   13:44:13 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x
00:07:30.640  ************************************
00:07:30.640  END TEST unittest_keyring
00:07:30.640  ************************************
00:07:30.899   13:44:13 unittest -- unit/unittest.sh@274 -- # [[ y == y ]]
00:07:30.899    13:44:13 unittest -- unit/unittest.sh@275 -- # hostname
00:07:30.899   13:44:13 unittest -- unit/unittest.sh@275 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:07:30.899  geninfo: WARNING: invalid characters removed from testname!
00:08:09.633   13:44:50 unittest -- unit/unittest.sh@276 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:08:13.819   13:44:56 unittest -- unit/unittest.sh@277 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:17.100   13:44:59 unittest -- unit/unittest.sh@278 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:19.630   13:45:02 unittest -- unit/unittest.sh@279 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:22.917   13:45:05 unittest -- unit/unittest.sh@280 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:25.453   13:45:07 unittest -- unit/unittest.sh@281 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:27.986   13:45:10 unittest -- unit/unittest.sh@282 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:08:27.986   13:45:10 unittest -- unit/unittest.sh@283 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:28.924  Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:28.924  Found 338 entries.
00:08:28.924  Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:08:28.924  Writing .css and .png files.
00:08:28.924  Generating output.
00:08:28.924  Processing file include/linux/virtio_ring.h
00:08:29.182  Processing file include/spdk/base64.h
00:08:29.182  Processing file include/spdk/trace.h
00:08:29.182  Processing file include/spdk/fsdev_module.h
00:08:29.182  Processing file include/spdk/histogram_data.h
00:08:29.182  Processing file include/spdk/endian.h
00:08:29.182  Processing file include/spdk/nvmf_transport.h
00:08:29.182  Processing file include/spdk/nvme.h
00:08:29.182  Processing file include/spdk/util.h
00:08:29.182  Processing file include/spdk/bdev_module.h
00:08:29.182  Processing file include/spdk/mmio.h
00:08:29.182  Processing file include/spdk/nvme_spec.h
00:08:29.182  Processing file include/spdk/thread.h
00:08:29.182  Processing file include/spdk_internal/virtio.h
00:08:29.182  Processing file include/spdk_internal/rdma_utils.h
00:08:29.183  Processing file include/spdk_internal/utf.h
00:08:29.183  Processing file include/spdk_internal/sock.h
00:08:29.183  Processing file include/spdk_internal/nvme_tcp.h
00:08:29.183  Processing file include/spdk_internal/sgl.h
00:08:29.442  Processing file lib/accel/accel.c
00:08:29.442  Processing file lib/accel/accel_sw.c
00:08:29.442  Processing file lib/accel/accel_rpc.c
00:08:29.442  Processing file lib/bdev/bdev.c
00:08:29.442  Processing file lib/bdev/part.c
00:08:29.442  Processing file lib/bdev/scsi_nvme.c
00:08:29.442  Processing file lib/bdev/bdev_rpc.c
00:08:29.442  Processing file lib/bdev/bdev_zone.c
00:08:29.700  Processing file lib/blob/request.c
00:08:29.700  Processing file lib/blob/blobstore.c
00:08:29.700  Processing file lib/blob/blobstore.h
00:08:29.700  Processing file lib/blob/zeroes.c
00:08:29.700  Processing file lib/blob/blob_bs_dev.c
00:08:29.959  Processing file lib/blobfs/tree.c
00:08:29.959  Processing file lib/blobfs/blobfs.c
00:08:29.959  Processing file lib/conf/conf.c
00:08:29.959  Processing file lib/dma/dma.c
00:08:30.218  Processing file lib/env_dpdk/init.c
00:08:30.218  Processing file lib/env_dpdk/sigbus_handler.c
00:08:30.218  Processing file lib/env_dpdk/pci.c
00:08:30.218  Processing file lib/env_dpdk/pci_virtio.c
00:08:30.218  Processing file lib/env_dpdk/threads.c
00:08:30.218  Processing file lib/env_dpdk/pci_vmd.c
00:08:30.218  Processing file lib/env_dpdk/env.c
00:08:30.218  Processing file lib/env_dpdk/memory.c
00:08:30.218  Processing file lib/env_dpdk/pci_dpdk.c
00:08:30.218  Processing file lib/env_dpdk/pci_dpdk_2211.c
00:08:30.218  Processing file lib/env_dpdk/pci_idxd.c
00:08:30.218  Processing file lib/env_dpdk/pci_event.c
00:08:30.218  Processing file lib/env_dpdk/pci_ioat.c
00:08:30.218  Processing file lib/env_dpdk/pci_dpdk_2207.c
00:08:30.218  Processing file lib/event/app_rpc.c
00:08:30.218  Processing file lib/event/scheduler_static.c
00:08:30.218  Processing file lib/event/log_rpc.c
00:08:30.218  Processing file lib/event/app.c
00:08:30.218  Processing file lib/event/reactor.c
00:08:30.477  Processing file lib/fsdev/fsdev_rpc.c
00:08:30.477  Processing file lib/fsdev/fsdev_io.c
00:08:30.477  Processing file lib/fsdev/fsdev.c
00:08:30.735  Processing file lib/ftl/ftl_core.h
00:08:30.735  Processing file lib/ftl/ftl_debug.c
00:08:30.735  Processing file lib/ftl/ftl_core.c
00:08:30.735  Processing file lib/ftl/ftl_layout.c
00:08:30.735  Processing file lib/ftl/ftl_io.c
00:08:30.735  Processing file lib/ftl/ftl_l2p_cache.c
00:08:30.735  Processing file lib/ftl/ftl_nv_cache.c
00:08:30.735  Processing file lib/ftl/ftl_trace.c
00:08:30.735  Processing file lib/ftl/ftl_nv_cache_io.h
00:08:30.735  Processing file lib/ftl/ftl_band.c
00:08:30.735  Processing file lib/ftl/ftl_nv_cache.h
00:08:30.735  Processing file lib/ftl/ftl_band.h
00:08:30.735  Processing file lib/ftl/ftl_band_ops.c
00:08:30.735  Processing file lib/ftl/ftl_rq.c
00:08:30.735  Processing file lib/ftl/ftl_sb.c
00:08:30.735  Processing file lib/ftl/ftl_writer.c
00:08:30.735  Processing file lib/ftl/ftl_l2p.c
00:08:30.735  Processing file lib/ftl/ftl_io.h
00:08:30.735  Processing file lib/ftl/ftl_p2l.c
00:08:30.735  Processing file lib/ftl/ftl_init.c
00:08:30.735  Processing file lib/ftl/ftl_reloc.c
00:08:30.735  Processing file lib/ftl/ftl_debug.h
00:08:30.735  Processing file lib/ftl/ftl_p2l_log.c
00:08:30.735  Processing file lib/ftl/ftl_writer.h
00:08:30.736  Processing file lib/ftl/ftl_l2p_flat.c
00:08:30.994  Processing file lib/ftl/base/ftl_base_dev.c
00:08:30.994  Processing file lib/ftl/base/ftl_base_bdev.c
00:08:30.994  Processing file lib/ftl/mngt/ftl_mngt_misc.c
00:08:30.994  Processing file lib/ftl/mngt/ftl_mngt_upgrade.c
00:08:30.994  Processing file lib/ftl/mngt/ftl_mngt_band.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_bdev.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_recovery.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_l2p.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_md.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_p2l.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_shutdown.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_startup.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_ioch.c
00:08:30.995  Processing file lib/ftl/mngt/ftl_mngt_self_test.c
00:08:31.254  Processing file lib/ftl/nvc/ftl_nvc_dev.c
00:08:31.254  Processing file lib/ftl/nvc/ftl_nvc_bdev_common.c
00:08:31.254  Processing file lib/ftl/nvc/ftl_nvc_bdev_non_vss.c
00:08:31.254  Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_trim_upgrade.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_layout_upgrade.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_band_upgrade.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_sb_upgrade.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_sb_v5.c
00:08:31.254  Processing file lib/ftl/upgrade/ftl_sb_v3.c
00:08:31.513  Processing file lib/ftl/utils/ftl_conf.c
00:08:31.513  Processing file lib/ftl/utils/ftl_df.h
00:08:31.513  Processing file lib/ftl/utils/ftl_property.c
00:08:31.513  Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c
00:08:31.513  Processing file lib/ftl/utils/ftl_md.c
00:08:31.513  Processing file lib/ftl/utils/ftl_mempool.c
00:08:31.513  Processing file lib/ftl/utils/ftl_property.h
00:08:31.513  Processing file lib/ftl/utils/ftl_addr_utils.h
00:08:31.513  Processing file lib/ftl/utils/ftl_bitmap.c
00:08:31.513  Processing file lib/fuse_dispatcher/fuse_dispatcher.c
00:08:31.513  Processing file lib/idxd/idxd_kernel.c
00:08:31.513  Processing file lib/idxd/idxd_internal.h
00:08:31.513  Processing file lib/idxd/idxd.c
00:08:31.513  Processing file lib/idxd/idxd_user.c
00:08:31.771  Processing file lib/init/subsystem_rpc.c
00:08:31.771  Processing file lib/init/subsystem.c
00:08:31.771  Processing file lib/init/json_config.c
00:08:31.771  Processing file lib/init/rpc.c
00:08:31.771  Processing file lib/ioat/ioat_internal.h
00:08:31.771  Processing file lib/ioat/ioat.c
00:08:32.029  Processing file lib/iscsi/task.c
00:08:32.029  Processing file lib/iscsi/init_grp.c
00:08:32.029  Processing file lib/iscsi/iscsi.c
00:08:32.029  Processing file lib/iscsi/portal_grp.c
00:08:32.029  Processing file lib/iscsi/tgt_node.c
00:08:32.029  Processing file lib/iscsi/iscsi.h
00:08:32.029  Processing file lib/iscsi/conn.c
00:08:32.029  Processing file lib/iscsi/task.h
00:08:32.029  Processing file lib/iscsi/iscsi_rpc.c
00:08:32.029  Processing file lib/iscsi/param.c
00:08:32.029  Processing file lib/iscsi/iscsi_subsystem.c
00:08:32.288  Processing file lib/json/json_parse.c
00:08:32.288  Processing file lib/json/json_write.c
00:08:32.288  Processing file lib/json/json_util.c
00:08:32.288  Processing file lib/jsonrpc/jsonrpc_server_tcp.c
00:08:32.288  Processing file lib/jsonrpc/jsonrpc_client.c
00:08:32.288  Processing file lib/jsonrpc/jsonrpc_client_tcp.c
00:08:32.288  Processing file lib/jsonrpc/jsonrpc_server.c
00:08:32.288  Processing file lib/keyring/keyring_rpc.c
00:08:32.288  Processing file lib/keyring/keyring.c
00:08:32.288  Processing file lib/log/log_deprecated.c
00:08:32.288  Processing file lib/log/log_flags.c
00:08:32.288  Processing file lib/log/log.c
00:08:32.545  Processing file lib/lvol/lvol.c
00:08:32.545  Processing file lib/nbd/nbd.c
00:08:32.545  Processing file lib/nbd/nbd_rpc.c
00:08:32.545  Processing file lib/notify/notify_rpc.c
00:08:32.545  Processing file lib/notify/notify.c
00:08:33.112  Processing file lib/nvme/nvme_poll_group.c
00:08:33.112  Processing file lib/nvme/nvme_pcie_common.c
00:08:33.112  Processing file lib/nvme/nvme_qpair.c
00:08:33.112  Processing file lib/nvme/nvme_internal.h
00:08:33.112  Processing file lib/nvme/nvme_ctrlr_cmd.c
00:08:33.112  Processing file lib/nvme/nvme_pcie_internal.h
00:08:33.113  Processing file lib/nvme/nvme_ctrlr.c
00:08:33.113  Processing file lib/nvme/nvme_ns_cmd.c
00:08:33.113  Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c
00:08:33.113  Processing file lib/nvme/nvme_zns.c
00:08:33.113  Processing file lib/nvme/nvme_cuse.c
00:08:33.113  Processing file lib/nvme/nvme_rdma.c
00:08:33.113  Processing file lib/nvme/nvme_transport.c
00:08:33.113  Processing file lib/nvme/nvme_fabric.c
00:08:33.113  Processing file lib/nvme/nvme_ns.c
00:08:33.113  Processing file lib/nvme/nvme_opal.c
00:08:33.113  Processing file lib/nvme/nvme_io_msg.c
00:08:33.113  Processing file lib/nvme/nvme_discovery.c
00:08:33.113  Processing file lib/nvme/nvme_auth.c
00:08:33.113  Processing file lib/nvme/nvme_tcp.c
00:08:33.113  Processing file lib/nvme/nvme_pcie.c
00:08:33.113  Processing file lib/nvme/nvme_ns_ocssd_cmd.c
00:08:33.113  Processing file lib/nvme/nvme_quirks.c
00:08:33.113  Processing file lib/nvme/nvme.c
00:08:33.679  Processing file lib/nvmf/tcp.c
00:08:33.679  Processing file lib/nvmf/nvmf_rpc.c
00:08:33.679  Processing file lib/nvmf/ctrlr.c
00:08:33.679  Processing file lib/nvmf/stubs.c
00:08:33.679  Processing file lib/nvmf/subsystem.c
00:08:33.679  Processing file lib/nvmf/nvmf.c
00:08:33.679  Processing file lib/nvmf/ctrlr_discovery.c
00:08:33.679  Processing file lib/nvmf/ctrlr_bdev.c
00:08:33.679  Processing file lib/nvmf/auth.c
00:08:33.679  Processing file lib/nvmf/transport.c
00:08:33.679  Processing file lib/nvmf/nvmf_internal.h
00:08:33.679  Processing file lib/nvmf/rdma.c
00:08:33.679  Processing file lib/rdma_provider/rdma_provider_verbs.c
00:08:33.679  Processing file lib/rdma_provider/common.c
00:08:33.679  Processing file lib/rdma_utils/rdma_utils.c
00:08:33.679  Processing file lib/rpc/rpc.c
00:08:33.937  Processing file lib/scsi/task.c
00:08:33.937  Processing file lib/scsi/scsi_rpc.c
00:08:33.937  Processing file lib/scsi/scsi.c
00:08:33.937  Processing file lib/scsi/scsi_pr.c
00:08:33.937  Processing file lib/scsi/dev.c
00:08:33.937  Processing file lib/scsi/port.c
00:08:33.937  Processing file lib/scsi/scsi_bdev.c
00:08:33.937  Processing file lib/scsi/lun.c
00:08:33.937  Processing file lib/sock/sock_rpc.c
00:08:33.937  Processing file lib/sock/sock.c
00:08:33.937  Processing file lib/thread/thread.c
00:08:33.937  Processing file lib/thread/iobuf.c
00:08:34.197  Processing file lib/trace/trace.c
00:08:34.197  Processing file lib/trace/trace_rpc.c
00:08:34.197  Processing file lib/trace/trace_flags.c
00:08:34.197  Processing file lib/trace_parser/trace.cpp
00:08:34.197  Processing file lib/ublk/ublk_rpc.c
00:08:34.197  Processing file lib/ublk/ublk.c
00:08:34.197  Processing file lib/ut/ut.c
00:08:34.197  Processing file lib/ut_mock/mock.c
00:08:34.824  Processing file lib/util/crc64.c
00:08:34.824  Processing file lib/util/math.c
00:08:34.824  Processing file lib/util/crc32_ieee.c
00:08:34.824  Processing file lib/util/pipe.c
00:08:34.824  Processing file lib/util/hexlify.c
00:08:34.824  Processing file lib/util/crc32c.c
00:08:34.824  Processing file lib/util/md5.c
00:08:34.824  Processing file lib/util/zipf.c
00:08:34.824  Processing file lib/util/crc16.c
00:08:34.824  Processing file lib/util/crc32.c
00:08:34.824  Processing file lib/util/strerror_tls.c
00:08:34.824  Processing file lib/util/dif.c
00:08:34.824  Processing file lib/util/file.c
00:08:34.824  Processing file lib/util/fd.c
00:08:34.824  Processing file lib/util/string.c
00:08:34.824  Processing file lib/util/net.c
00:08:34.824  Processing file lib/util/uuid.c
00:08:34.824  Processing file lib/util/cpuset.c
00:08:34.824  Processing file lib/util/iov.c
00:08:34.824  Processing file lib/util/bit_array.c
00:08:34.824  Processing file lib/util/xor.c
00:08:34.824  Processing file lib/util/base64.c
00:08:34.824  Processing file lib/util/fd_group.c
00:08:34.824  Processing file lib/vfio_user/host/vfio_user_pci.c
00:08:34.824  Processing file lib/vfio_user/host/vfio_user.c
00:08:34.824  Processing file lib/vhost/rte_vhost_user.c
00:08:34.824  Processing file lib/vhost/vhost_scsi.c
00:08:34.824  Processing file lib/vhost/vhost.c
00:08:34.824  Processing file lib/vhost/vhost_blk.c
00:08:34.824  Processing file lib/vhost/vhost_rpc.c
00:08:34.824  Processing file lib/vhost/vhost_internal.h
00:08:35.083  Processing file lib/virtio/virtio_pci.c
00:08:35.083  Processing file lib/virtio/virtio_vhost_user.c
00:08:35.083  Processing file lib/virtio/virtio.c
00:08:35.083  Processing file lib/virtio/virtio_vfio_user.c
00:08:35.083  Processing file lib/vmd/led.c
00:08:35.083  Processing file lib/vmd/vmd.c
00:08:35.083  Processing file module/accel/dsa/accel_dsa.c
00:08:35.083  Processing file module/accel/dsa/accel_dsa_rpc.c
00:08:35.083  Processing file module/accel/error/accel_error.c
00:08:35.083  Processing file module/accel/error/accel_error_rpc.c
00:08:35.341  Processing file module/accel/iaa/accel_iaa.c
00:08:35.341  Processing file module/accel/iaa/accel_iaa_rpc.c
00:08:35.341  Processing file module/accel/ioat/accel_ioat_rpc.c
00:08:35.341  Processing file module/accel/ioat/accel_ioat.c
00:08:35.341  Processing file module/bdev/aio/bdev_aio_rpc.c
00:08:35.341  Processing file module/bdev/aio/bdev_aio.c
00:08:35.341  Processing file module/bdev/delay/vbdev_delay.c
00:08:35.341  Processing file module/bdev/delay/vbdev_delay_rpc.c
00:08:35.341  Processing file module/bdev/error/vbdev_error_rpc.c
00:08:35.341  Processing file module/bdev/error/vbdev_error.c
00:08:35.600  Processing file module/bdev/ftl/bdev_ftl_rpc.c
00:08:35.600  Processing file module/bdev/ftl/bdev_ftl.c
00:08:35.600  Processing file module/bdev/gpt/gpt.h
00:08:35.600  Processing file module/bdev/gpt/vbdev_gpt.c
00:08:35.600  Processing file module/bdev/gpt/gpt.c
00:08:35.600  Processing file module/bdev/iscsi/bdev_iscsi.c
00:08:35.600  Processing file module/bdev/iscsi/bdev_iscsi_rpc.c
00:08:35.866  Processing file module/bdev/lvol/vbdev_lvol_rpc.c
00:08:35.866  Processing file module/bdev/lvol/vbdev_lvol.c
00:08:35.866  Processing file module/bdev/malloc/bdev_malloc_rpc.c
00:08:35.866  Processing file module/bdev/malloc/bdev_malloc.c
00:08:35.866  Processing file module/bdev/null/bdev_null_rpc.c
00:08:35.866  Processing file module/bdev/null/bdev_null.c
00:08:36.129  Processing file module/bdev/nvme/nvme_rpc.c
00:08:36.129  Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c
00:08:36.129  Processing file module/bdev/nvme/bdev_mdns_client.c
00:08:36.129  Processing file module/bdev/nvme/vbdev_opal_rpc.c
00:08:36.129  Processing file module/bdev/nvme/bdev_nvme_rpc.c
00:08:36.129  Processing file module/bdev/nvme/bdev_nvme.c
00:08:36.129  Processing file module/bdev/nvme/vbdev_opal.c
00:08:36.129  Processing file module/bdev/passthru/vbdev_passthru.c
00:08:36.129  Processing file module/bdev/passthru/vbdev_passthru_rpc.c
00:08:36.387  Processing file module/bdev/raid/bdev_raid_rpc.c
00:08:36.387  Processing file module/bdev/raid/bdev_raid.h
00:08:36.387  Processing file module/bdev/raid/raid0.c
00:08:36.387  Processing file module/bdev/raid/concat.c
00:08:36.387  Processing file module/bdev/raid/raid1.c
00:08:36.387  Processing file module/bdev/raid/bdev_raid.c
00:08:36.387  Processing file module/bdev/raid/bdev_raid_sb.c
00:08:36.387  Processing file module/bdev/split/vbdev_split.c
00:08:36.387  Processing file module/bdev/split/vbdev_split_rpc.c
00:08:36.646  Processing file module/bdev/virtio/bdev_virtio_rpc.c
00:08:36.646  Processing file module/bdev/virtio/bdev_virtio_blk.c
00:08:36.646  Processing file module/bdev/virtio/bdev_virtio_scsi.c
00:08:36.646  Processing file module/bdev/zone_block/vbdev_zone_block.c
00:08:36.646  Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c
00:08:36.646  Processing file module/blob/bdev/blob_bdev.c
00:08:36.646  Processing file module/blobfs/bdev/blobfs_bdev.c
00:08:36.646  Processing file module/blobfs/bdev/blobfs_bdev_rpc.c
00:08:36.646  Processing file module/env_dpdk/env_dpdk_rpc.c
00:08:36.904  Processing file module/event/subsystems/accel/accel.c
00:08:36.904  Processing file module/event/subsystems/bdev/bdev.c
00:08:36.904  Processing file module/event/subsystems/fsdev/fsdev.c
00:08:36.904  Processing file module/event/subsystems/iobuf/iobuf.c
00:08:36.904  Processing file module/event/subsystems/iobuf/iobuf_rpc.c
00:08:36.904  Processing file module/event/subsystems/iscsi/iscsi.c
00:08:36.904  Processing file module/event/subsystems/keyring/keyring.c
00:08:36.904  Processing file module/event/subsystems/nbd/nbd.c
00:08:37.163  Processing file module/event/subsystems/nvmf/nvmf_rpc.c
00:08:37.163  Processing file module/event/subsystems/nvmf/nvmf_tgt.c
00:08:37.163  Processing file module/event/subsystems/scheduler/scheduler.c
00:08:37.163  Processing file module/event/subsystems/scsi/scsi.c
00:08:37.163  Processing file module/event/subsystems/sock/sock.c
00:08:37.163  Processing file module/event/subsystems/ublk/ublk.c
00:08:37.163  Processing file module/event/subsystems/vhost_blk/vhost_blk.c
00:08:37.421  Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c
00:08:37.421  Processing file module/event/subsystems/vmd/vmd.c
00:08:37.421  Processing file module/event/subsystems/vmd/vmd_rpc.c
00:08:37.421  Processing file module/fsdev/aio/linux_aio_mgr.c
00:08:37.421  Processing file module/fsdev/aio/fsdev_aio_rpc.c
00:08:37.421  Processing file module/fsdev/aio/fsdev_aio.c
00:08:37.421  Processing file module/keyring/file/keyring_rpc.c
00:08:37.421  Processing file module/keyring/file/keyring.c
00:08:37.679  Processing file module/keyring/linux/keyring.c
00:08:37.679  Processing file module/keyring/linux/keyring_rpc.c
00:08:37.679  Processing file module/scheduler/dpdk_governor/dpdk_governor.c
00:08:37.679  Processing file module/scheduler/dynamic/scheduler_dynamic.c
00:08:37.679  Processing file module/scheduler/gscheduler/gscheduler.c
00:08:37.679  Processing file module/sock/posix/posix.c
00:08:37.679  Writing directory view page.
00:08:37.679  Overall coverage rate:
00:08:37.679    lines......: 37.2% (42445 of 114084 lines)
00:08:37.679    functions..: 40.9% (3931 of 9613 functions)
00:08:37.679  Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:37.679  
00:08:37.679  
00:08:37.679  =====================
00:08:37.679  All unit tests passed
00:08:37.679  =====================
00:08:37.679  
00:08:37.679  
00:08:37.679   13:45:20 unittest -- unit/unittest.sh@284 -- # echo 'Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage'
00:08:37.679   13:45:20 unittest -- unit/unittest.sh@287 -- # set +x
00:08:37.679  
00:08:37.679  real	2m48.557s
00:08:37.679  user	2m23.516s
00:08:37.679  sys	0m17.711s
00:08:37.679   13:45:20 unittest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:37.679   13:45:20 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:37.679  ************************************
00:08:37.679  END TEST unittest
00:08:37.679  ************************************
00:08:37.938   13:45:20  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:08:37.938   13:45:20  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:37.938   13:45:20  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:37.938   13:45:20  -- spdk/autotest.sh@149 -- # timing_enter lib
00:08:37.938   13:45:20  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:37.938   13:45:20  -- common/autotest_common.sh@10 -- # set +x
00:08:37.938   13:45:20  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:08:37.938   13:45:20  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:37.938   13:45:20  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:37.938   13:45:20  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:37.938   13:45:20  -- common/autotest_common.sh@10 -- # set +x
00:08:37.938  ************************************
00:08:37.938  START TEST env
00:08:37.938  ************************************
00:08:37.938   13:45:20 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:37.938  * Looking for test storage...
00:08:37.938  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:37.938     13:45:20 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:37.938     13:45:20 env -- common/autotest_common.sh@1711 -- # lcov --version
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:37.938    13:45:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:37.938    13:45:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:37.938    13:45:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:37.938    13:45:20 env -- scripts/common.sh@336 -- # IFS=.-:
00:08:37.938    13:45:20 env -- scripts/common.sh@336 -- # read -ra ver1
00:08:37.938    13:45:20 env -- scripts/common.sh@337 -- # IFS=.-:
00:08:37.938    13:45:20 env -- scripts/common.sh@337 -- # read -ra ver2
00:08:37.938    13:45:20 env -- scripts/common.sh@338 -- # local 'op=<'
00:08:37.938    13:45:20 env -- scripts/common.sh@340 -- # ver1_l=2
00:08:37.938    13:45:20 env -- scripts/common.sh@341 -- # ver2_l=1
00:08:37.938    13:45:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:37.938    13:45:20 env -- scripts/common.sh@344 -- # case "$op" in
00:08:37.938    13:45:20 env -- scripts/common.sh@345 -- # : 1
00:08:37.938    13:45:20 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:37.938    13:45:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:37.938     13:45:20 env -- scripts/common.sh@365 -- # decimal 1
00:08:37.938     13:45:20 env -- scripts/common.sh@353 -- # local d=1
00:08:37.938     13:45:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:37.938     13:45:20 env -- scripts/common.sh@355 -- # echo 1
00:08:37.938    13:45:20 env -- scripts/common.sh@365 -- # ver1[v]=1
00:08:37.938     13:45:20 env -- scripts/common.sh@366 -- # decimal 2
00:08:37.938     13:45:20 env -- scripts/common.sh@353 -- # local d=2
00:08:37.938     13:45:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:37.938     13:45:20 env -- scripts/common.sh@355 -- # echo 2
00:08:37.938    13:45:20 env -- scripts/common.sh@366 -- # ver2[v]=2
00:08:37.938    13:45:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:37.938    13:45:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:37.938    13:45:20 env -- scripts/common.sh@368 -- # return 0
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:37.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.938  		--rc genhtml_branch_coverage=1
00:08:37.938  		--rc genhtml_function_coverage=1
00:08:37.938  		--rc genhtml_legend=1
00:08:37.938  		--rc geninfo_all_blocks=1
00:08:37.938  		--rc geninfo_unexecuted_blocks=1
00:08:37.938  		
00:08:37.938  		'
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:37.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.938  		--rc genhtml_branch_coverage=1
00:08:37.938  		--rc genhtml_function_coverage=1
00:08:37.938  		--rc genhtml_legend=1
00:08:37.938  		--rc geninfo_all_blocks=1
00:08:37.938  		--rc geninfo_unexecuted_blocks=1
00:08:37.938  		
00:08:37.938  		'
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:37.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.938  		--rc genhtml_branch_coverage=1
00:08:37.938  		--rc genhtml_function_coverage=1
00:08:37.938  		--rc genhtml_legend=1
00:08:37.938  		--rc geninfo_all_blocks=1
00:08:37.938  		--rc geninfo_unexecuted_blocks=1
00:08:37.938  		
00:08:37.938  		'
00:08:37.938    13:45:20 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:37.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.938  		--rc genhtml_branch_coverage=1
00:08:37.938  		--rc genhtml_function_coverage=1
00:08:37.938  		--rc genhtml_legend=1
00:08:37.938  		--rc geninfo_all_blocks=1
00:08:37.938  		--rc geninfo_unexecuted_blocks=1
00:08:37.938  		
00:08:37.938  		'
00:08:37.938   13:45:20 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:37.938   13:45:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:37.938   13:45:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:37.938   13:45:20 env -- common/autotest_common.sh@10 -- # set +x
00:08:37.938  ************************************
00:08:37.938  START TEST env_memory
00:08:37.938  ************************************
00:08:37.938   13:45:20 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:37.938  
00:08:37.938  
00:08:37.938       CUnit - A unit testing framework for C - Version 2.1-3
00:08:37.938       http://cunit.sourceforge.net/
00:08:37.938  
00:08:37.938  
00:08:37.938  Suite: memory
00:08:38.197    Test: alloc and free memory map ...[2024-12-11 13:45:20.780763] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:38.197  passed
00:08:38.197    Test: mem map translation ...[2024-12-11 13:45:20.850125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:38.197  [2024-12-11 13:45:20.850210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:38.197  [2024-12-11 13:45:20.850345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:38.197  [2024-12-11 13:45:20.850380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:38.197  passed
00:08:38.197    Test: mem map registration ...[2024-12-11 13:45:20.933181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:08:38.197  [2024-12-11 13:45:20.933272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:08:38.197  passed
00:08:38.455    Test: mem map adjacent registrations ...passed
00:08:38.455  
00:08:38.455  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:38.455                suites      1      1    n/a      0        0
00:08:38.455                 tests      4      4      4      0        0
00:08:38.455               asserts    152    152    152      0      n/a
00:08:38.455  
00:08:38.455  Elapsed time =    0.330 seconds
00:08:38.455  
00:08:38.455  real	0m0.370s
00:08:38.455  user	0m0.340s
00:08:38.455  sys	0m0.031s
00:08:38.455   13:45:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.455   13:45:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:08:38.455  ************************************
00:08:38.455  END TEST env_memory
00:08:38.455  ************************************
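
The *ERROR* lines in the env_memory output above are expected: the test feeds deliberately bad parameters into the spdk_mem_map API, which tracks memory at 2 MB granularity. A minimal sketch of what trips those checks, assuming the usual declarations in spdk/env.h (illustrative only, not the test's actual source):

    #include "spdk/env.h"
    #include <assert.h>

    static void mem_map_sketch(void)
    {
        /* Default translation 0 for unmapped regions; no notify callbacks. */
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, NULL, NULL);
        assert(map != NULL);

        /* Fails: length is not a multiple of the 2 MB page size
         * ("vaddr=2097152 len=1234" above). */
        int rc = spdk_mem_map_set_translation(map, 0x200000, 1234, 0);
        assert(rc != 0);

        /* Fails: virtual address is not 2 MB aligned
         * ("vaddr=1234 len=2097152" above). */
        rc = spdk_mem_map_set_translation(map, 1234, 0x200000, 0);
        assert(rc != 0);

        spdk_mem_map_free(&map);
    }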
00:08:38.455   13:45:21 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:38.455   13:45:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:38.455   13:45:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:38.455   13:45:21 env -- common/autotest_common.sh@10 -- # set +x
00:08:38.455  ************************************
00:08:38.455  START TEST env_vtophys
00:08:38.455  ************************************
00:08:38.455   13:45:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:38.455  EAL: lib.eal log level changed from notice to debug
00:08:38.455  EAL: Detected lcore 0 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 1 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 2 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 3 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 4 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 5 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 6 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 7 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 8 as core 0 on socket 0
00:08:38.455  EAL: Detected lcore 9 as core 0 on socket 0
00:08:38.455  EAL: Maximum logical cores by configuration: 128
00:08:38.455  EAL: Detected CPU lcores: 10
00:08:38.455  EAL: Detected NUMA nodes: 1
00:08:38.455  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:08:38.455  EAL: Checking presence of .so 'librte_eal.so.24'
00:08:38.455  EAL: Checking presence of .so 'librte_eal.so'
00:08:38.455  EAL: Detected static linkage of DPDK
00:08:38.455  EAL: No shared files mode enabled, IPC will be disabled
00:08:38.714  EAL: Selected IOVA mode 'PA'
00:08:38.714  EAL: Probing VFIO support...
00:08:38.714  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:08:38.714  EAL: VFIO modules not loaded, skipping VFIO support...
00:08:38.714  EAL: Ask a virtual area of 0x2e000 bytes
00:08:38.714  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:08:38.714  EAL: Setting up physically contiguous memory...
00:08:38.714  EAL: Setting maximum number of open files to 1048576
00:08:38.714  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:38.715  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:38.715  EAL: Ask a virtual area of 0x61000 bytes
00:08:38.715  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:38.715  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:38.715  EAL: Ask a virtual area of 0x400000000 bytes
00:08:38.715  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:38.715  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:38.715  EAL: Ask a virtual area of 0x61000 bytes
00:08:38.715  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:38.715  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:38.715  EAL: Ask a virtual area of 0x400000000 bytes
00:08:38.715  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:38.715  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:38.715  EAL: Ask a virtual area of 0x61000 bytes
00:08:38.715  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:38.715  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:38.715  EAL: Ask a virtual area of 0x400000000 bytes
00:08:38.715  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:38.715  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:38.715  EAL: Ask a virtual area of 0x61000 bytes
00:08:38.715  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:38.715  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:38.715  EAL: Ask a virtual area of 0x400000000 bytes
00:08:38.715  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:38.715  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:38.715  EAL: Hugepages will be freed exactly as allocated.
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: TSC frequency is ~2100000 KHz
00:08:38.715  EAL: Main lcore 0 is ready (tid=7c92f4c2ca80;cpuset=[0])
00:08:38.715  EAL: Trying to obtain current memory policy.
00:08:38.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:38.715  EAL: Restoring previous memory policy: 0
00:08:38.715  EAL: request: mp_malloc_sync
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: Heap on socket 0 was expanded by 2MB
00:08:38.715  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:08:38.715  EAL: Mem event callback 'spdk:(nil)' registered
00:08:38.715  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:08:38.715  
00:08:38.715  
00:08:38.715       CUnit - A unit testing framework for C - Version 2.1-3
00:08:38.715       http://cunit.sourceforge.net/
00:08:38.715  
00:08:38.715  
00:08:38.715  Suite: components_suite
00:08:38.715    Test: vtophys_malloc_test ...passed
00:08:38.715    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:38.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:38.715  EAL: Restoring previous memory policy: 4
00:08:38.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.715  EAL: request: mp_malloc_sync
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: Heap on socket 0 was expanded by 4MB
00:08:38.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.715  EAL: request: mp_malloc_sync
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: Heap on socket 0 was shrunk by 4MB
00:08:38.715  EAL: Trying to obtain current memory policy.
00:08:38.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:38.715  EAL: Restoring previous memory policy: 4
00:08:38.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.715  EAL: request: mp_malloc_sync
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: Heap on socket 0 was expanded by 6MB
00:08:38.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.715  EAL: request: mp_malloc_sync
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: Heap on socket 0 was shrunk by 6MB
00:08:38.715  EAL: Trying to obtain current memory policy.
00:08:38.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:38.715  EAL: Restoring previous memory policy: 4
00:08:38.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.715  EAL: request: mp_malloc_sync
00:08:38.715  EAL: No shared files mode enabled, IPC is disabled
00:08:38.715  EAL: Heap on socket 0 was expanded by 10MB
00:08:38.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.974  EAL: request: mp_malloc_sync
00:08:38.974  EAL: No shared files mode enabled, IPC is disabled
00:08:38.974  EAL: Heap on socket 0 was shrunk by 10MB
00:08:38.974  EAL: Trying to obtain current memory policy.
00:08:38.974  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:38.974  EAL: Restoring previous memory policy: 4
00:08:38.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.974  EAL: request: mp_malloc_sync
00:08:38.974  EAL: No shared files mode enabled, IPC is disabled
00:08:38.974  EAL: Heap on socket 0 was expanded by 18MB
00:08:38.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.974  EAL: request: mp_malloc_sync
00:08:38.974  EAL: No shared files mode enabled, IPC is disabled
00:08:38.974  EAL: Heap on socket 0 was shrunk by 18MB
00:08:38.974  EAL: Trying to obtain current memory policy.
00:08:38.974  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:38.974  EAL: Restoring previous memory policy: 4
00:08:38.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.974  EAL: request: mp_malloc_sync
00:08:38.974  EAL: No shared files mode enabled, IPC is disabled
00:08:38.974  EAL: Heap on socket 0 was expanded by 34MB
00:08:38.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:38.974  EAL: request: mp_malloc_sync
00:08:38.974  EAL: No shared files mode enabled, IPC is disabled
00:08:38.974  EAL: Heap on socket 0 was shrunk by 34MB
00:08:38.974  EAL: Trying to obtain current memory policy.
00:08:38.974  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:39.234  EAL: Restoring previous memory policy: 4
00:08:39.234  EAL: Calling mem event callback 'spdk:(nil)'
00:08:39.234  EAL: request: mp_malloc_sync
00:08:39.234  EAL: No shared files mode enabled, IPC is disabled
00:08:39.234  EAL: Heap on socket 0 was expanded by 66MB
00:08:39.234  EAL: Calling mem event callback 'spdk:(nil)'
00:08:39.234  EAL: request: mp_malloc_sync
00:08:39.234  EAL: No shared files mode enabled, IPC is disabled
00:08:39.234  EAL: Heap on socket 0 was shrunk by 66MB
00:08:39.492  EAL: Trying to obtain current memory policy.
00:08:39.492  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:39.492  EAL: Restoring previous memory policy: 4
00:08:39.492  EAL: Calling mem event callback 'spdk:(nil)'
00:08:39.492  EAL: request: mp_malloc_sync
00:08:39.492  EAL: No shared files mode enabled, IPC is disabled
00:08:39.492  EAL: Heap on socket 0 was expanded by 130MB
00:08:39.751  EAL: Calling mem event callback 'spdk:(nil)'
00:08:39.751  EAL: request: mp_malloc_sync
00:08:39.751  EAL: No shared files mode enabled, IPC is disabled
00:08:39.751  EAL: Heap on socket 0 was shrunk by 130MB
00:08:40.011  EAL: Trying to obtain current memory policy.
00:08:40.011  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:40.011  EAL: Restoring previous memory policy: 4
00:08:40.011  EAL: Calling mem event callback 'spdk:(nil)'
00:08:40.011  EAL: request: mp_malloc_sync
00:08:40.011  EAL: No shared files mode enabled, IPC is disabled
00:08:40.011  EAL: Heap on socket 0 was expanded by 258MB
00:08:40.947  EAL: Calling mem event callback 'spdk:(nil)'
00:08:40.947  EAL: request: mp_malloc_sync
00:08:40.947  EAL: No shared files mode enabled, IPC is disabled
00:08:40.947  EAL: Heap on socket 0 was shrunk by 258MB
00:08:41.206  EAL: Trying to obtain current memory policy.
00:08:41.206  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:41.466  EAL: Restoring previous memory policy: 4
00:08:41.466  EAL: Calling mem event callback 'spdk:(nil)'
00:08:41.466  EAL: request: mp_malloc_sync
00:08:41.466  EAL: No shared files mode enabled, IPC is disabled
00:08:41.466  EAL: Heap on socket 0 was expanded by 514MB
00:08:42.844  EAL: Calling mem event callback 'spdk:(nil)'
00:08:42.844  EAL: request: mp_malloc_sync
00:08:42.844  EAL: No shared files mode enabled, IPC is disabled
00:08:42.844  EAL: Heap on socket 0 was shrunk by 514MB
00:08:43.781  EAL: Trying to obtain current memory policy.
00:08:43.781  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:44.040  EAL: Restoring previous memory policy: 4
00:08:44.040  EAL: Calling mem event callback 'spdk:(nil)'
00:08:44.040  EAL: request: mp_malloc_sync
00:08:44.040  EAL: No shared files mode enabled, IPC is disabled
00:08:44.040  EAL: Heap on socket 0 was expanded by 1026MB
00:08:46.574  EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.574  EAL: request: mp_malloc_sync
00:08:46.574  EAL: No shared files mode enabled, IPC is disabled
00:08:46.574  EAL: Heap on socket 0 was shrunk by 1026MB
00:08:48.476  passed
00:08:48.476  
00:08:48.476  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:48.476                suites      1      1    n/a      0        0
00:08:48.476                 tests      2      2      2      0        0
00:08:48.476               asserts   5250   5250   5250      0      n/a
00:08:48.476  
00:08:48.476  Elapsed time =    9.442 seconds
00:08:48.476  EAL: Calling mem event callback 'spdk:(nil)'
00:08:48.476  EAL: request: mp_malloc_sync
00:08:48.476  EAL: No shared files mode enabled, IPC is disabled
00:08:48.476  EAL: Heap on socket 0 was shrunk by 2MB
00:08:48.476  EAL: No shared files mode enabled, IPC is disabled
00:08:48.476  EAL: No shared files mode enabled, IPC is disabled
00:08:48.476  EAL: No shared files mode enabled, IPC is disabled
00:08:48.476  
00:08:48.476  real	0m9.769s
00:08:48.476  user	0m8.294s
00:08:48.476  sys	0m1.353s
00:08:48.476   13:45:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.476   13:45:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:48.476  ************************************
00:08:48.476  END TEST env_vtophys
00:08:48.476  ************************************
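
The alternating "expanded by"/"shrunk by" lines above are the DPDK heap growing and shrinking as vtophys_malloc_test allocates progressively larger DMA buffers and frees them again. A hedged sketch of the allocate-and-translate pattern behind those messages, assuming the public spdk/env.h API and an already-initialized environment:

    #include "spdk/env.h"
    #include <assert.h>

    static void vtophys_sketch(void)
    {
        /* DMA-safe allocation from the SPDK/DPDK heap; this is what makes
         * the EAL print the "Heap on socket 0 was expanded by ..." lines. */
        void *buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
        assert(buf != NULL);

        /* Translate the virtual address to a physical one. */
        uint64_t paddr = spdk_vtophys(buf, NULL);
        assert(paddr != SPDK_VTOPHYS_ERROR);

        spdk_dma_free(buf);  /* ...and the matching "was shrunk by" lines. */
    }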
00:08:48.476   13:45:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:48.476   13:45:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:48.476   13:45:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.476   13:45:30 env -- common/autotest_common.sh@10 -- # set +x
00:08:48.476  ************************************
00:08:48.476  START TEST env_pci
00:08:48.476  ************************************
00:08:48.476   13:45:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:48.476  
00:08:48.476  
00:08:48.476       CUnit - A unit testing framework for C - Version 2.1-3
00:08:48.476       http://cunit.sourceforge.net/
00:08:48.476  
00:08:48.476  
00:08:48.476  Suite: pci
00:08:48.476    Test: pci_hook ...[2024-12-11 13:45:30.976076] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68359 has claimed it
00:08:48.476  passed
00:08:48.476  EAL: Cannot find device (10000:00:01.0)
00:08:48.476  EAL: Failed to attach device on primary process
00:08:48.476  
00:08:48.476  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:48.476                suites      1      1    n/a      0        0
00:08:48.476                 tests      1      1      1      0        0
00:08:48.476               asserts     25     25     25      0      n/a
00:08:48.476  
00:08:48.476  Elapsed time =    0.010 seconds
00:08:48.476  
00:08:48.476  real	0m0.094s
00:08:48.476  user	0m0.046s
00:08:48.476  sys	0m0.049s
00:08:48.476   13:45:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.476   13:45:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:48.476  ************************************
00:08:48.476  END TEST env_pci
00:08:48.476  ************************************
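
pci_hook deliberately claims the emulated device 10000:00:01.0 so that a second claim fails, exercising the /var/tmp/spdk_pci_lock_* path named in the error above. As a rough sketch of device claiming via the public PCI API (assumed signatures from spdk/env.h, not the test's source):

    #include "spdk/env.h"
    #include <stdio.h>

    /* Called once per NVMe-class device found during enumeration. */
    static int enum_cb(void *ctx, struct spdk_pci_device *dev)
    {
        struct spdk_pci_addr addr = spdk_pci_device_get_addr(dev);
        char bdf[32];

        spdk_pci_addr_fmt(bdf, sizeof(bdf), &addr);
        /* Claiming creates /var/tmp/spdk_pci_lock_<bdf>, which is what the
         * "Cannot create lock on device" message above refers to. */
        if (spdk_pci_device_claim(dev) != 0) {
            printf("%s already claimed by another process\n", bdf);
        }
        return 0;
    }

    /* Usage: spdk_pci_enumerate(spdk_pci_nvme_get_driver(), enum_cb, NULL); */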
00:08:48.476   13:45:31 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:48.476    13:45:31 env -- env/env.sh@15 -- # uname
00:08:48.476   13:45:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:48.476   13:45:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:48.476   13:45:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:48.476   13:45:31 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:48.476   13:45:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.476   13:45:31 env -- common/autotest_common.sh@10 -- # set +x
00:08:48.476  ************************************
00:08:48.476  START TEST env_dpdk_post_init
00:08:48.476  ************************************
00:08:48.476   13:45:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:48.476  EAL: Detected CPU lcores: 10
00:08:48.476  EAL: Detected NUMA nodes: 1
00:08:48.476  EAL: Detected static linkage of DPDK
00:08:48.476  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:48.476  EAL: Selected IOVA mode 'PA'
00:08:48.735  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:48.735  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:08:48.735  Starting DPDK initialization...
00:08:48.735  Starting SPDK post initialization...
00:08:48.735  SPDK NVMe probe
00:08:48.735  Attaching to 0000:00:10.0
00:08:48.735  Attached to 0000:00:10.0
00:08:48.735  Cleaning up...
00:08:48.735  
00:08:48.735  real	0m0.316s
00:08:48.735  user	0m0.100s
00:08:48.735  sys	0m0.117s
00:08:48.735   13:45:31 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.735   13:45:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:48.735  ************************************
00:08:48.735  END TEST env_dpdk_post_init
00:08:48.735  ************************************
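
env_dpdk_post_init brings the environment up with the -c 0x1 core mask and --base-virtaddr=0x200000000000 shown above. A minimal sketch of the equivalent programmatic initialization, assuming the standard spdk_env_opts fields (the application name here is hypothetical):

    #include "spdk/env.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init_sketch";  /* hypothetical name */
        opts.core_mask = "0x1";                   /* matches -c 0x1 above */
        opts.base_virtaddr = 0x200000000000;      /* matches --base-virtaddr */

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "Unable to initialize SPDK env\n");
            return 1;
        }
        /* ... probe NVMe devices, do I/O ... */
        spdk_env_fini();
        return 0;
    }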
00:08:48.735    13:45:31 env -- env/env.sh@26 -- # uname
00:08:48.735   13:45:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:48.735   13:45:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:48.735   13:45:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:48.735   13:45:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.735   13:45:31 env -- common/autotest_common.sh@10 -- # set +x
00:08:48.735  ************************************
00:08:48.735  START TEST env_mem_callbacks
00:08:48.735  ************************************
00:08:48.735   13:45:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:48.994  EAL: Detected CPU lcores: 10
00:08:48.994  EAL: Detected NUMA nodes: 1
00:08:48.994  EAL: Detected static linkage of DPDK
00:08:48.994  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:48.994  EAL: Selected IOVA mode 'PA'
00:08:48.994  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:48.994  
00:08:48.994  
00:08:48.994       CUnit - A unit testing framework for C - Version 2.1-3
00:08:48.994       http://cunit.sourceforge.net/
00:08:48.994  
00:08:48.994  
00:08:48.994  Suite: memory
00:08:48.994    Test: test ...
00:08:48.994  register 0x200000200000 2097152
00:08:48.994  malloc 3145728
00:08:48.994  register 0x200000400000 4194304
00:08:48.994  buf 0x2000004fffc0 len 3145728 PASSED
00:08:48.994  malloc 64
00:08:48.994  buf 0x2000004ffec0 len 64 PASSED
00:08:48.994  malloc 4194304
00:08:48.994  register 0x200000800000 6291456
00:08:48.994  buf 0x2000009fffc0 len 4194304 PASSED
00:08:48.994  free 0x2000004fffc0 3145728
00:08:48.994  free 0x2000004ffec0 64
00:08:48.994  unregister 0x200000400000 4194304 PASSED
00:08:48.994  free 0x2000009fffc0 4194304
00:08:48.994  unregister 0x200000800000 6291456 PASSED
00:08:48.994  malloc 8388608
00:08:48.994  register 0x200000400000 10485760
00:08:48.994  buf 0x2000005fffc0 len 8388608 PASSED
00:08:48.994  free 0x2000005fffc0 8388608
00:08:48.994  unregister 0x200000400000 10485760 PASSED
00:08:48.994  passed
00:08:48.994  
00:08:48.994  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:48.994                suites      1      1    n/a      0        0
00:08:48.994                 tests      1      1      1      0        0
00:08:48.994               asserts     15     15     15      0      n/a
00:08:48.994  
00:08:48.994  Elapsed time =    0.095 seconds
00:08:49.252  
00:08:49.253  real	0m0.332s
00:08:49.253  user	0m0.142s
00:08:49.253  sys	0m0.092s
00:08:49.253   13:45:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.253   13:45:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:49.253  ************************************
00:08:49.253  END TEST env_mem_callbacks
00:08:49.253  ************************************
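
The register/unregister lines in the test body above are printed from a memory-map notify callback that fires on each spdk_mem_register/spdk_mem_unregister. A sketch of that callback wiring, assuming the spdk/env.h memory-map API (not the test's actual source):

    #include "spdk/env.h"
    #include <stdio.h>

    /* Invoked whenever a region is registered with or unregistered from
     * the SPDK memory map; the register/unregister lines above come from
     * a callback much like this one. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

    /* Usage sketch: allocate a map that receives the notifications, then
     * register a 2 MB-aligned region:
     *   struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
     *   spdk_mem_register((void *)0x200000200000, 0x200000);
     */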
00:08:49.253  
00:08:49.253  real	0m11.340s
00:08:49.253  user	0m9.111s
00:08:49.253  sys	0m1.930s
00:08:49.253   13:45:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.253   13:45:31 env -- common/autotest_common.sh@10 -- # set +x
00:08:49.253  ************************************
00:08:49.253  END TEST env
00:08:49.253  ************************************
00:08:49.253   13:45:31  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:49.253   13:45:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:49.253   13:45:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:49.253   13:45:31  -- common/autotest_common.sh@10 -- # set +x
00:08:49.253  ************************************
00:08:49.253  START TEST rpc
00:08:49.253  ************************************
00:08:49.253   13:45:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:49.253  * Looking for test storage...
00:08:49.253  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:49.253    13:45:31 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:49.253     13:45:31 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:49.253     13:45:31 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:49.511    13:45:32 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:49.511    13:45:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:49.511    13:45:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:49.511    13:45:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:49.511    13:45:32 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:49.511    13:45:32 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:49.511    13:45:32 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:49.511    13:45:32 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:49.511    13:45:32 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:49.511    13:45:32 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:49.511    13:45:32 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:49.511    13:45:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:49.511    13:45:32 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:49.511    13:45:32 rpc -- scripts/common.sh@345 -- # : 1
00:08:49.511    13:45:32 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:49.512    13:45:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:49.512     13:45:32 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:49.512     13:45:32 rpc -- scripts/common.sh@353 -- # local d=1
00:08:49.512     13:45:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:49.512     13:45:32 rpc -- scripts/common.sh@355 -- # echo 1
00:08:49.512    13:45:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:49.512     13:45:32 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:49.512     13:45:32 rpc -- scripts/common.sh@353 -- # local d=2
00:08:49.512     13:45:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:49.512     13:45:32 rpc -- scripts/common.sh@355 -- # echo 2
00:08:49.512    13:45:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:49.512    13:45:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:49.512    13:45:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:49.512    13:45:32 rpc -- scripts/common.sh@368 -- # return 0
00:08:49.512    13:45:32 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:49.512    13:45:32 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:49.512  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:49.512  		--rc genhtml_branch_coverage=1
00:08:49.512  		--rc genhtml_function_coverage=1
00:08:49.512  		--rc genhtml_legend=1
00:08:49.512  		--rc geninfo_all_blocks=1
00:08:49.512  		--rc geninfo_unexecuted_blocks=1
00:08:49.512  		
00:08:49.512  		'
00:08:49.512    13:45:32 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:49.512  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:49.512  		--rc genhtml_branch_coverage=1
00:08:49.512  		--rc genhtml_function_coverage=1
00:08:49.512  		--rc genhtml_legend=1
00:08:49.512  		--rc geninfo_all_blocks=1
00:08:49.512  		--rc geninfo_unexecuted_blocks=1
00:08:49.512  		
00:08:49.512  		'
00:08:49.512    13:45:32 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:49.512  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:49.512  		--rc genhtml_branch_coverage=1
00:08:49.512  		--rc genhtml_function_coverage=1
00:08:49.512  		--rc genhtml_legend=1
00:08:49.512  		--rc geninfo_all_blocks=1
00:08:49.512  		--rc geninfo_unexecuted_blocks=1
00:08:49.512  		
00:08:49.512  		'
00:08:49.512    13:45:32 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:49.512  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:49.512  		--rc genhtml_branch_coverage=1
00:08:49.512  		--rc genhtml_function_coverage=1
00:08:49.512  		--rc genhtml_legend=1
00:08:49.512  		--rc geninfo_all_blocks=1
00:08:49.512  		--rc geninfo_unexecuted_blocks=1
00:08:49.512  		
00:08:49.512  		'
00:08:49.512   13:45:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68486
00:08:49.512   13:45:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:49.512   13:45:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68486
00:08:49.512   13:45:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 68486 ']'
00:08:49.512   13:45:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:08:49.512   13:45:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:49.512   13:45:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:49.512   13:45:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:49.512  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:49.512   13:45:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:49.512   13:45:32 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:49.512  [2024-12-11 13:45:32.180032] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:08:49.512  [2024-12-11 13:45:32.180210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68486 ]
00:08:49.771  [2024-12-11 13:45:32.359912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:49.772  [2024-12-11 13:45:32.508273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:49.772  [2024-12-11 13:45:32.508347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68486' to capture a snapshot of events at runtime.
00:08:49.772  [2024-12-11 13:45:32.508363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:49.772  [2024-12-11 13:45:32.508379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:49.772  [2024-12-11 13:45:32.508394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68486 for offline analysis/debug.
00:08:49.772  [2024-12-11 13:45:32.509978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.151   13:45:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:51.151   13:45:33 rpc -- common/autotest_common.sh@868 -- # return 0
00:08:51.151   13:45:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:51.151   13:45:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:51.151   13:45:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:51.151   13:45:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:51.151   13:45:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:51.151   13:45:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.151   13:45:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:51.151  ************************************
00:08:51.151  START TEST rpc_integrity
00:08:51.151  ************************************
00:08:51.151   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:51.151    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.151   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:51.151    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:51.151   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:51.151    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.151   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:51.151    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.151    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.151   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:51.151  {
00:08:51.152  "name": "Malloc0",
00:08:51.152  "aliases": [
00:08:51.152  "e5f80c2f-1319-47a8-9a6c-75ae059089ef"
00:08:51.152  ],
00:08:51.152  "product_name": "Malloc disk",
00:08:51.152  "block_size": 512,
00:08:51.152  "num_blocks": 16384,
00:08:51.152  "uuid": "e5f80c2f-1319-47a8-9a6c-75ae059089ef",
00:08:51.152  "assigned_rate_limits": {
00:08:51.152  "rw_ios_per_sec": 0,
00:08:51.152  "rw_mbytes_per_sec": 0,
00:08:51.152  "r_mbytes_per_sec": 0,
00:08:51.152  "w_mbytes_per_sec": 0
00:08:51.152  },
00:08:51.152  "claimed": false,
00:08:51.152  "zoned": false,
00:08:51.152  "supported_io_types": {
00:08:51.152  "read": true,
00:08:51.152  "write": true,
00:08:51.152  "unmap": true,
00:08:51.152  "flush": true,
00:08:51.152  "reset": true,
00:08:51.152  "nvme_admin": false,
00:08:51.152  "nvme_io": false,
00:08:51.152  "nvme_io_md": false,
00:08:51.152  "write_zeroes": true,
00:08:51.152  "zcopy": true,
00:08:51.152  "get_zone_info": false,
00:08:51.152  "zone_management": false,
00:08:51.152  "zone_append": false,
00:08:51.152  "compare": false,
00:08:51.152  "compare_and_write": false,
00:08:51.152  "abort": true,
00:08:51.152  "seek_hole": false,
00:08:51.152  "seek_data": false,
00:08:51.152  "copy": true,
00:08:51.152  "nvme_iov_md": false
00:08:51.152  },
00:08:51.152  "memory_domains": [
00:08:51.152  {
00:08:51.152  "dma_device_id": "system",
00:08:51.152  "dma_device_type": 1
00:08:51.152  },
00:08:51.152  {
00:08:51.152  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.152  "dma_device_type": 2
00:08:51.152  }
00:08:51.152  ],
00:08:51.152  "driver_specific": {}
00:08:51.152  }
00:08:51.152  ]'
00:08:51.152    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:51.152   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:51.152   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:51.152   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.152   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.152  [2024-12-11 13:45:33.784354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:51.152  [2024-12-11 13:45:33.784460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:51.152  [2024-12-11 13:45:33.784493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280
00:08:51.152  [2024-12-11 13:45:33.784512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:51.152  [2024-12-11 13:45:33.787903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:51.152  [2024-12-11 13:45:33.787964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:51.152  Passthru0
00:08:51.152   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.152    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:51.152    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.152    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.152    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.152   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:51.152  {
00:08:51.152  "name": "Malloc0",
00:08:51.152  "aliases": [
00:08:51.152  "e5f80c2f-1319-47a8-9a6c-75ae059089ef"
00:08:51.152  ],
00:08:51.152  "product_name": "Malloc disk",
00:08:51.152  "block_size": 512,
00:08:51.152  "num_blocks": 16384,
00:08:51.152  "uuid": "e5f80c2f-1319-47a8-9a6c-75ae059089ef",
00:08:51.152  "assigned_rate_limits": {
00:08:51.152  "rw_ios_per_sec": 0,
00:08:51.152  "rw_mbytes_per_sec": 0,
00:08:51.152  "r_mbytes_per_sec": 0,
00:08:51.152  "w_mbytes_per_sec": 0
00:08:51.152  },
00:08:51.152  "claimed": true,
00:08:51.152  "claim_type": "exclusive_write",
00:08:51.152  "zoned": false,
00:08:51.152  "supported_io_types": {
00:08:51.152  "read": true,
00:08:51.152  "write": true,
00:08:51.152  "unmap": true,
00:08:51.152  "flush": true,
00:08:51.152  "reset": true,
00:08:51.152  "nvme_admin": false,
00:08:51.152  "nvme_io": false,
00:08:51.152  "nvme_io_md": false,
00:08:51.152  "write_zeroes": true,
00:08:51.152  "zcopy": true,
00:08:51.152  "get_zone_info": false,
00:08:51.152  "zone_management": false,
00:08:51.152  "zone_append": false,
00:08:51.152  "compare": false,
00:08:51.152  "compare_and_write": false,
00:08:51.152  "abort": true,
00:08:51.152  "seek_hole": false,
00:08:51.152  "seek_data": false,
00:08:51.152  "copy": true,
00:08:51.152  "nvme_iov_md": false
00:08:51.152  },
00:08:51.152  "memory_domains": [
00:08:51.152  {
00:08:51.152  "dma_device_id": "system",
00:08:51.152  "dma_device_type": 1
00:08:51.152  },
00:08:51.152  {
00:08:51.152  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.152  "dma_device_type": 2
00:08:51.152  }
00:08:51.152  ],
00:08:51.152  "driver_specific": {}
00:08:51.152  },
00:08:51.152  {
00:08:51.152  "name": "Passthru0",
00:08:51.152  "aliases": [
00:08:51.152  "ec5fbf11-f3a5-54c5-821f-938f0827279b"
00:08:51.152  ],
00:08:51.152  "product_name": "passthru",
00:08:51.152  "block_size": 512,
00:08:51.152  "num_blocks": 16384,
00:08:51.152  "uuid": "ec5fbf11-f3a5-54c5-821f-938f0827279b",
00:08:51.152  "assigned_rate_limits": {
00:08:51.152  "rw_ios_per_sec": 0,
00:08:51.152  "rw_mbytes_per_sec": 0,
00:08:51.152  "r_mbytes_per_sec": 0,
00:08:51.152  "w_mbytes_per_sec": 0
00:08:51.152  },
00:08:51.152  "claimed": false,
00:08:51.152  "zoned": false,
00:08:51.152  "supported_io_types": {
00:08:51.152  "read": true,
00:08:51.152  "write": true,
00:08:51.152  "unmap": true,
00:08:51.152  "flush": true,
00:08:51.152  "reset": true,
00:08:51.152  "nvme_admin": false,
00:08:51.152  "nvme_io": false,
00:08:51.152  "nvme_io_md": false,
00:08:51.152  "write_zeroes": true,
00:08:51.152  "zcopy": true,
00:08:51.152  "get_zone_info": false,
00:08:51.152  "zone_management": false,
00:08:51.152  "zone_append": false,
00:08:51.152  "compare": false,
00:08:51.152  "compare_and_write": false,
00:08:51.152  "abort": true,
00:08:51.152  "seek_hole": false,
00:08:51.152  "seek_data": false,
00:08:51.152  "copy": true,
00:08:51.152  "nvme_iov_md": false
00:08:51.152  },
00:08:51.152  "memory_domains": [
00:08:51.152  {
00:08:51.152  "dma_device_id": "system",
00:08:51.152  "dma_device_type": 1
00:08:51.152  },
00:08:51.152  {
00:08:51.152  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.152  "dma_device_type": 2
00:08:51.152  }
00:08:51.152  ],
00:08:51.152  "driver_specific": {
00:08:51.152  "passthru": {
00:08:51.152  "name": "Passthru0",
00:08:51.152  "base_bdev_name": "Malloc0"
00:08:51.152  }
00:08:51.152  }
00:08:51.152  }
00:08:51.152  ]'
00:08:51.153    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:51.153   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:51.153   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.153   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.153    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:51.153    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.153    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.153    13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.153   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:51.153    13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:51.153   13:45:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:51.153  
00:08:51.153  real	0m0.212s
00:08:51.153  user	0m0.050s
00:08:51.153  sys	0m0.054s
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.153   13:45:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.153  ************************************
00:08:51.153  END TEST rpc_integrity
00:08:51.153  ************************************
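
rpc_integrity drives bdev_malloc_create, bdev_passthru_create, and bdev_get_bdevs over the Unix-domain JSON-RPC socket of the spdk_tgt started above (pid 68486). On the server side, such methods are registered with SPDK_RPC_REGISTER; a sketch of the registration pattern with a hypothetical "hello" method (not one of the methods exercised here):

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"
    #include "spdk/json.h"

    /* Hypothetical method, just to show the registration pattern. */
    static void
    rpc_hello(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
    {
        struct spdk_json_write_ctx *w;

        if (params != NULL) {
            spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "hello requires no parameters");
            return;
        }

        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_string(w, "hello");
        spdk_jsonrpc_end_result(request, w);
    }
    SPDK_RPC_REGISTER("hello", rpc_hello, SPDK_RPC_RUNTIME)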
00:08:51.412   13:45:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:51.412   13:45:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:51.412   13:45:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.412   13:45:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:51.412  ************************************
00:08:51.412  START TEST rpc_plugins
00:08:51.412  ************************************
00:08:51.412   13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:08:51.412    13:45:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:51.412    13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.412    13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:51.412    13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.412   13:45:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:51.412    13:45:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:51.412    13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.412    13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:51.412    13:45:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.412   13:45:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:08:51.412  {
00:08:51.412  "name": "Malloc1",
00:08:51.412  "aliases": [
00:08:51.412  "0165b7b9-aaf0-4d15-82a9-fb5ec18b7461"
00:08:51.412  ],
00:08:51.412  "product_name": "Malloc disk",
00:08:51.412  "block_size": 4096,
00:08:51.412  "num_blocks": 256,
00:08:51.412  "uuid": "0165b7b9-aaf0-4d15-82a9-fb5ec18b7461",
00:08:51.412  "assigned_rate_limits": {
00:08:51.412  "rw_ios_per_sec": 0,
00:08:51.413  "rw_mbytes_per_sec": 0,
00:08:51.413  "r_mbytes_per_sec": 0,
00:08:51.413  "w_mbytes_per_sec": 0
00:08:51.413  },
00:08:51.413  "claimed": false,
00:08:51.413  "zoned": false,
00:08:51.413  "supported_io_types": {
00:08:51.413  "read": true,
00:08:51.413  "write": true,
00:08:51.413  "unmap": true,
00:08:51.413  "flush": true,
00:08:51.413  "reset": true,
00:08:51.413  "nvme_admin": false,
00:08:51.413  "nvme_io": false,
00:08:51.413  "nvme_io_md": false,
00:08:51.413  "write_zeroes": true,
00:08:51.413  "zcopy": true,
00:08:51.413  "get_zone_info": false,
00:08:51.413  "zone_management": false,
00:08:51.413  "zone_append": false,
00:08:51.413  "compare": false,
00:08:51.413  "compare_and_write": false,
00:08:51.413  "abort": true,
00:08:51.413  "seek_hole": false,
00:08:51.413  "seek_data": false,
00:08:51.413  "copy": true,
00:08:51.413  "nvme_iov_md": false
00:08:51.413  },
00:08:51.413  "memory_domains": [
00:08:51.413  {
00:08:51.413  "dma_device_id": "system",
00:08:51.413  "dma_device_type": 1
00:08:51.413  },
00:08:51.413  {
00:08:51.413  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.413  "dma_device_type": 2
00:08:51.413  }
00:08:51.413  ],
00:08:51.413  "driver_specific": {}
00:08:51.413  }
00:08:51.413  ]'
00:08:51.413    13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:08:51.413   13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:51.413   13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:51.413   13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.413   13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:51.413   13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.413    13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:51.413    13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.413    13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:51.413    13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.413   13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:51.413    13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:08:51.413   13:45:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:51.413  
00:08:51.413  real	0m0.089s
00:08:51.413  user	0m0.025s
00:08:51.413  sys	0m0.025s
00:08:51.413   13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.413   13:45:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:51.413  ************************************
00:08:51.413  END TEST rpc_plugins
00:08:51.413  ************************************
00:08:51.413   13:45:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:51.413   13:45:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:51.413   13:45:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.413   13:45:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:51.413  ************************************
00:08:51.413  START TEST rpc_trace_cmd_test
00:08:51.413  ************************************
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:08:51.413  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68486",
00:08:51.413  "tpoint_group_mask": "0x8",
00:08:51.413  "iscsi_conn": {
00:08:51.413  "mask": "0x2",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "scsi": {
00:08:51.413  "mask": "0x4",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "bdev": {
00:08:51.413  "mask": "0x8",
00:08:51.413  "tpoint_mask": "0xffffffffffffffff"
00:08:51.413  },
00:08:51.413  "nvmf_rdma": {
00:08:51.413  "mask": "0x10",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "nvmf_tcp": {
00:08:51.413  "mask": "0x20",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "ftl": {
00:08:51.413  "mask": "0x40",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "blobfs": {
00:08:51.413  "mask": "0x80",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "dsa": {
00:08:51.413  "mask": "0x200",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "thread": {
00:08:51.413  "mask": "0x400",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "nvme_pcie": {
00:08:51.413  "mask": "0x800",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "iaa": {
00:08:51.413  "mask": "0x1000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "nvme_tcp": {
00:08:51.413  "mask": "0x2000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "bdev_nvme": {
00:08:51.413  "mask": "0x4000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "sock": {
00:08:51.413  "mask": "0x8000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "blob": {
00:08:51.413  "mask": "0x10000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "bdev_raid": {
00:08:51.413  "mask": "0x20000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  },
00:08:51.413  "scheduler": {
00:08:51.413  "mask": "0x40000",
00:08:51.413  "tpoint_mask": "0x0"
00:08:51.413  }
00:08:51.413  }'
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:08:51.413    13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:08:51.413  
00:08:51.413  real	0m0.080s
00:08:51.413  user	0m0.038s
00:08:51.413  sys	0m0.036s
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.413   13:45:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:51.413  ************************************
00:08:51.413  END TEST rpc_trace_cmd_test
00:08:51.414  ************************************
00:08:51.673   13:45:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:08:51.673   13:45:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:08:51.673   13:45:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:08:51.673   13:45:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:51.673   13:45:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.673   13:45:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:51.673  ************************************
00:08:51.673  START TEST rpc_daemon_integrity
00:08:51.673  ************************************
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:51.673  {
00:08:51.673  "name": "Malloc2",
00:08:51.673  "aliases": [
00:08:51.673  "720520e1-5fb5-4ce6-99ee-dd7e1f05d19f"
00:08:51.673  ],
00:08:51.673  "product_name": "Malloc disk",
00:08:51.673  "block_size": 512,
00:08:51.673  "num_blocks": 16384,
00:08:51.673  "uuid": "720520e1-5fb5-4ce6-99ee-dd7e1f05d19f",
00:08:51.673  "assigned_rate_limits": {
00:08:51.673  "rw_ios_per_sec": 0,
00:08:51.673  "rw_mbytes_per_sec": 0,
00:08:51.673  "r_mbytes_per_sec": 0,
00:08:51.673  "w_mbytes_per_sec": 0
00:08:51.673  },
00:08:51.673  "claimed": false,
00:08:51.673  "zoned": false,
00:08:51.673  "supported_io_types": {
00:08:51.673  "read": true,
00:08:51.673  "write": true,
00:08:51.673  "unmap": true,
00:08:51.673  "flush": true,
00:08:51.673  "reset": true,
00:08:51.673  "nvme_admin": false,
00:08:51.673  "nvme_io": false,
00:08:51.673  "nvme_io_md": false,
00:08:51.673  "write_zeroes": true,
00:08:51.673  "zcopy": true,
00:08:51.673  "get_zone_info": false,
00:08:51.673  "zone_management": false,
00:08:51.673  "zone_append": false,
00:08:51.673  "compare": false,
00:08:51.673  "compare_and_write": false,
00:08:51.673  "abort": true,
00:08:51.673  "seek_hole": false,
00:08:51.673  "seek_data": false,
00:08:51.673  "copy": true,
00:08:51.673  "nvme_iov_md": false
00:08:51.673  },
00:08:51.673  "memory_domains": [
00:08:51.673  {
00:08:51.673  "dma_device_id": "system",
00:08:51.673  "dma_device_type": 1
00:08:51.673  },
00:08:51.673  {
00:08:51.673  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.673  "dma_device_type": 2
00:08:51.673  }
00:08:51.673  ],
00:08:51.673  "driver_specific": {}
00:08:51.673  }
00:08:51.673  ]'
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.673  [2024-12-11 13:45:34.324136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:08:51.673  [2024-12-11 13:45:34.324217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:51.673  [2024-12-11 13:45:34.324245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480
00:08:51.673  [2024-12-11 13:45:34.324261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:51.673  [2024-12-11 13:45:34.327095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:51.673  [2024-12-11 13:45:34.327169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:51.673  Passthru0
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.673    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.673   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:51.673  {
00:08:51.673  "name": "Malloc2",
00:08:51.673  "aliases": [
00:08:51.674  "720520e1-5fb5-4ce6-99ee-dd7e1f05d19f"
00:08:51.674  ],
00:08:51.674  "product_name": "Malloc disk",
00:08:51.674  "block_size": 512,
00:08:51.674  "num_blocks": 16384,
00:08:51.674  "uuid": "720520e1-5fb5-4ce6-99ee-dd7e1f05d19f",
00:08:51.674  "assigned_rate_limits": {
00:08:51.674  "rw_ios_per_sec": 0,
00:08:51.674  "rw_mbytes_per_sec": 0,
00:08:51.674  "r_mbytes_per_sec": 0,
00:08:51.674  "w_mbytes_per_sec": 0
00:08:51.674  },
00:08:51.674  "claimed": true,
00:08:51.674  "claim_type": "exclusive_write",
00:08:51.674  "zoned": false,
00:08:51.674  "supported_io_types": {
00:08:51.674  "read": true,
00:08:51.674  "write": true,
00:08:51.674  "unmap": true,
00:08:51.674  "flush": true,
00:08:51.674  "reset": true,
00:08:51.674  "nvme_admin": false,
00:08:51.674  "nvme_io": false,
00:08:51.674  "nvme_io_md": false,
00:08:51.674  "write_zeroes": true,
00:08:51.674  "zcopy": true,
00:08:51.674  "get_zone_info": false,
00:08:51.674  "zone_management": false,
00:08:51.674  "zone_append": false,
00:08:51.674  "compare": false,
00:08:51.674  "compare_and_write": false,
00:08:51.674  "abort": true,
00:08:51.674  "seek_hole": false,
00:08:51.674  "seek_data": false,
00:08:51.674  "copy": true,
00:08:51.674  "nvme_iov_md": false
00:08:51.674  },
00:08:51.674  "memory_domains": [
00:08:51.674  {
00:08:51.674  "dma_device_id": "system",
00:08:51.674  "dma_device_type": 1
00:08:51.674  },
00:08:51.674  {
00:08:51.674  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.674  "dma_device_type": 2
00:08:51.674  }
00:08:51.674  ],
00:08:51.674  "driver_specific": {}
00:08:51.674  },
00:08:51.674  {
00:08:51.674  "name": "Passthru0",
00:08:51.674  "aliases": [
00:08:51.674  "8080aab2-38b4-561d-ba45-f801d1a92e67"
00:08:51.674  ],
00:08:51.674  "product_name": "passthru",
00:08:51.674  "block_size": 512,
00:08:51.674  "num_blocks": 16384,
00:08:51.674  "uuid": "8080aab2-38b4-561d-ba45-f801d1a92e67",
00:08:51.674  "assigned_rate_limits": {
00:08:51.674  "rw_ios_per_sec": 0,
00:08:51.674  "rw_mbytes_per_sec": 0,
00:08:51.674  "r_mbytes_per_sec": 0,
00:08:51.674  "w_mbytes_per_sec": 0
00:08:51.674  },
00:08:51.674  "claimed": false,
00:08:51.674  "zoned": false,
00:08:51.674  "supported_io_types": {
00:08:51.674  "read": true,
00:08:51.674  "write": true,
00:08:51.674  "unmap": true,
00:08:51.674  "flush": true,
00:08:51.674  "reset": true,
00:08:51.674  "nvme_admin": false,
00:08:51.674  "nvme_io": false,
00:08:51.674  "nvme_io_md": false,
00:08:51.674  "write_zeroes": true,
00:08:51.674  "zcopy": true,
00:08:51.674  "get_zone_info": false,
00:08:51.674  "zone_management": false,
00:08:51.674  "zone_append": false,
00:08:51.674  "compare": false,
00:08:51.674  "compare_and_write": false,
00:08:51.674  "abort": true,
00:08:51.674  "seek_hole": false,
00:08:51.674  "seek_data": false,
00:08:51.674  "copy": true,
00:08:51.674  "nvme_iov_md": false
00:08:51.674  },
00:08:51.674  "memory_domains": [
00:08:51.674  {
00:08:51.674  "dma_device_id": "system",
00:08:51.674  "dma_device_type": 1
00:08:51.674  },
00:08:51.674  {
00:08:51.674  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.674  "dma_device_type": 2
00:08:51.674  }
00:08:51.674  ],
00:08:51.674  "driver_specific": {
00:08:51.674  "passthru": {
00:08:51.674  "name": "Passthru0",
00:08:51.674  "base_bdev_name": "Malloc2"
00:08:51.674  }
00:08:51.674  }
00:08:51.674  }
00:08:51.674  ]'
00:08:51.674    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.674    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:51.674    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:51.674    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:51.674    13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:51.674    13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:51.674  
00:08:51.674  real	0m0.206s
00:08:51.674  user	0m0.039s
00:08:51.674  sys	0m0.062s
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.674  ************************************
00:08:51.674  END TEST rpc_daemon_integrity
00:08:51.674  ************************************
00:08:51.674   13:45:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
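In rpc.py terms, the integrity test just executed is roughly this sequence (a sketch against the default RPC socket):

  ./scripts/rpc.py bdev_malloc_create 8 512             # creates Malloc2: 8 MiB, 512-byte blocks
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2: Malloc2 plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length           # back to 0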
00:08:51.934   13:45:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:51.934   13:45:34 rpc -- rpc/rpc.sh@84 -- # killprocess 68486
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 68486 ']'
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@958 -- # kill -0 68486
00:08:51.934    13:45:34 rpc -- common/autotest_common.sh@959 -- # uname
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:51.934    13:45:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68486
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:51.934  killing process with pid 68486
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68486'
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@973 -- # kill 68486
00:08:51.934   13:45:34 rpc -- common/autotest_common.sh@978 -- # wait 68486
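The teardown traced above is the harness's killprocess helper, which in simplified form amounts to the sketch below (the real helper also special-cases targets running under sudo):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return                 # nothing to do if the process is already gone
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"               # terminate the target and reap it
  }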
00:08:55.220  
00:08:55.220  real	0m5.366s
00:08:55.220  user	0m5.314s
00:08:55.220  sys	0m1.007s
00:08:55.220   13:45:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.220   13:45:37 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:55.220  ************************************
00:08:55.220  END TEST rpc
00:08:55.220  ************************************
00:08:55.220   13:45:37  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:55.220   13:45:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.220   13:45:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.220   13:45:37  -- common/autotest_common.sh@10 -- # set +x
00:08:55.220  ************************************
00:08:55.220  START TEST skip_rpc
00:08:55.220  ************************************
00:08:55.220   13:45:37 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:55.220  * Looking for test storage...
00:08:55.220  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:55.220     13:45:37 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:55.220     13:45:37 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@345 -- # : 1
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:55.220     13:45:37 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:55.220    13:45:37 skip_rpc -- scripts/common.sh@368 -- # return 0
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:55.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.220  		--rc genhtml_branch_coverage=1
00:08:55.220  		--rc genhtml_function_coverage=1
00:08:55.220  		--rc genhtml_legend=1
00:08:55.220  		--rc geninfo_all_blocks=1
00:08:55.220  		--rc geninfo_unexecuted_blocks=1
00:08:55.220  		
00:08:55.220  		'
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:55.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.220  		--rc genhtml_branch_coverage=1
00:08:55.220  		--rc genhtml_function_coverage=1
00:08:55.220  		--rc genhtml_legend=1
00:08:55.220  		--rc geninfo_all_blocks=1
00:08:55.220  		--rc geninfo_unexecuted_blocks=1
00:08:55.220  		
00:08:55.220  		'
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:55.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.220  		--rc genhtml_branch_coverage=1
00:08:55.220  		--rc genhtml_function_coverage=1
00:08:55.220  		--rc genhtml_legend=1
00:08:55.220  		--rc geninfo_all_blocks=1
00:08:55.220  		--rc geninfo_unexecuted_blocks=1
00:08:55.220  		
00:08:55.220  		'
00:08:55.220    13:45:37 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:55.220  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.220  		--rc genhtml_branch_coverage=1
00:08:55.220  		--rc genhtml_function_coverage=1
00:08:55.220  		--rc genhtml_legend=1
00:08:55.220  		--rc geninfo_all_blocks=1
00:08:55.220  		--rc geninfo_unexecuted_blocks=1
00:08:55.220  		
00:08:55.220  		'
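The probe above gates the coverage flags on the lcov version: cmp_versions splits both strings on ".-:" and compares field by field, so here it reduces to ver1[0]=1 against ver2[0]=2, "<" holds, and lt returns 0. In effect:

  cmp_versions 1.15 '<' 2   # field-by-field: 1 < 2, returns 0 (true)

and because this lcov predates 2.0, the --rc lcov_branch_coverage / lcov_function_coverage options are exported for the rest of the run.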
00:08:55.220   13:45:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:55.220   13:45:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:55.220   13:45:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:08:55.220   13:45:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.220   13:45:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.220   13:45:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:55.220  ************************************
00:08:55.220  START TEST skip_rpc
00:08:55.220  ************************************
00:08:55.220   13:45:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:08:55.220   13:45:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=68715
00:08:55.220   13:45:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:55.220   13:45:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:08:55.220   13:45:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:08:55.220  [2024-12-11 13:45:37.631193] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:08:55.220  [2024-12-11 13:45:37.631370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68715 ]
00:08:55.220  [2024-12-11 13:45:37.827099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.220  [2024-12-11 13:45:37.970756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:00.487    13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 68715
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 68715 ']'
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 68715
00:09:00.487    13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:00.487    13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68715
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:00.487  killing process with pid 68715
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68715'
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 68715
00:09:00.487   13:45:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 68715
00:09:03.016  
00:09:03.016  real	0m7.905s
00:09:03.016  user	0m7.318s
00:09:03.016  sys	0m0.530s
00:09:03.016   13:45:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:03.016   13:45:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:03.016  ************************************
00:09:03.016  END TEST skip_rpc
00:09:03.016  ************************************
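In outline, test_skip_rpc does what the trace shows (a sketch using this run's paths and flags; rpc_cmd is the harness wrapper around rpc.py):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target boots with no RPC listener
  sleep 5
  rpc_cmd spdk_get_version                        # must fail: nothing listens on the socket
  killprocess $!                                  # plain signal-based teardown still works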
00:09:03.016   13:45:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:09:03.016   13:45:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:03.016   13:45:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:03.016   13:45:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:03.017  ************************************
00:09:03.017  START TEST skip_rpc_with_json
00:09:03.017  ************************************
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=68830
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 68830
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 68830 ']'
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:03.017  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:03.017   13:45:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:03.017  [2024-12-11 13:45:45.546012] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:03.017  [2024-12-11 13:45:45.546150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68830 ]
00:09:03.017  [2024-12-11 13:45:45.726097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.275  [2024-12-11 13:45:45.899262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:04.649  [2024-12-11 13:45:47.188789] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:09:04.649  request:
00:09:04.649  {
00:09:04.649  "trtype": "tcp",
00:09:04.649  "method": "nvmf_get_transports",
00:09:04.649  "req_id": 1
00:09:04.649  }
00:09:04.649  Got JSON-RPC error response
00:09:04.649  response:
00:09:04.649  {
00:09:04.649  "code": -19,
00:09:04.649  "message": "No such device"
00:09:04.649  }
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:09:04.649   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:04.650  [2024-12-11 13:45:47.200992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
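These two calls show the expected transport lifecycle: nvmf_get_transports returns -19 (No such device) while no TCP transport exists, and succeeds once one is created. The same exchange via rpc.py, as a sketch:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # error -19 before creation
  ./scripts/rpc.py nvmf_create_transport -t tcp       # target logs "*** TCP Transport Init ***"
  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # now lists the TCP transport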
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.650   13:45:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:04.650  {
00:09:04.650  "subsystems": [
00:09:04.650  {
00:09:04.650  "subsystem": "scheduler",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "framework_set_scheduler",
00:09:04.650  "params": {
00:09:04.650  "name": "static"
00:09:04.650  }
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "vmd",
00:09:04.650  "config": []
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "sock",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "sock_set_default_impl",
00:09:04.650  "params": {
00:09:04.650  "impl_name": "posix"
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "sock_impl_set_options",
00:09:04.650  "params": {
00:09:04.650  "impl_name": "ssl",
00:09:04.650  "recv_buf_size": 4096,
00:09:04.650  "send_buf_size": 4096,
00:09:04.650  "enable_recv_pipe": true,
00:09:04.650  "enable_quickack": false,
00:09:04.650  "enable_placement_id": 0,
00:09:04.650  "enable_zerocopy_send_server": true,
00:09:04.650  "enable_zerocopy_send_client": false,
00:09:04.650  "zerocopy_threshold": 0,
00:09:04.650  "tls_version": 0,
00:09:04.650  "enable_ktls": false
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "sock_impl_set_options",
00:09:04.650  "params": {
00:09:04.650  "impl_name": "posix",
00:09:04.650  "recv_buf_size": 2097152,
00:09:04.650  "send_buf_size": 2097152,
00:09:04.650  "enable_recv_pipe": true,
00:09:04.650  "enable_quickack": false,
00:09:04.650  "enable_placement_id": 0,
00:09:04.650  "enable_zerocopy_send_server": true,
00:09:04.650  "enable_zerocopy_send_client": false,
00:09:04.650  "zerocopy_threshold": 0,
00:09:04.650  "tls_version": 0,
00:09:04.650  "enable_ktls": false
00:09:04.650  }
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "iobuf",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "iobuf_set_options",
00:09:04.650  "params": {
00:09:04.650  "small_pool_count": 8192,
00:09:04.650  "large_pool_count": 1024,
00:09:04.650  "small_bufsize": 8192,
00:09:04.650  "large_bufsize": 135168,
00:09:04.650  "enable_numa": false
00:09:04.650  }
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "keyring",
00:09:04.650  "config": []
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "fsdev",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "fsdev_set_opts",
00:09:04.650  "params": {
00:09:04.650  "fsdev_io_pool_size": 65535,
00:09:04.650  "fsdev_io_cache_size": 256
00:09:04.650  }
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "accel",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "accel_set_options",
00:09:04.650  "params": {
00:09:04.650  "small_cache_size": 128,
00:09:04.650  "large_cache_size": 16,
00:09:04.650  "task_count": 2048,
00:09:04.650  "sequence_count": 2048,
00:09:04.650  "buf_count": 2048
00:09:04.650  }
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "bdev",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "bdev_set_options",
00:09:04.650  "params": {
00:09:04.650  "bdev_io_pool_size": 65535,
00:09:04.650  "bdev_io_cache_size": 256,
00:09:04.650  "bdev_auto_examine": true,
00:09:04.650  "iobuf_small_cache_size": 128,
00:09:04.650  "iobuf_large_cache_size": 16
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "bdev_raid_set_options",
00:09:04.650  "params": {
00:09:04.650  "process_window_size_kb": 1024,
00:09:04.650  "process_max_bandwidth_mb_sec": 0
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "bdev_nvme_set_options",
00:09:04.650  "params": {
00:09:04.650  "action_on_timeout": "none",
00:09:04.650  "timeout_us": 0,
00:09:04.650  "timeout_admin_us": 0,
00:09:04.650  "keep_alive_timeout_ms": 10000,
00:09:04.650  "arbitration_burst": 0,
00:09:04.650  "low_priority_weight": 0,
00:09:04.650  "medium_priority_weight": 0,
00:09:04.650  "high_priority_weight": 0,
00:09:04.650  "nvme_adminq_poll_period_us": 10000,
00:09:04.650  "nvme_ioq_poll_period_us": 0,
00:09:04.650  "io_queue_requests": 0,
00:09:04.650  "delay_cmd_submit": true,
00:09:04.650  "transport_retry_count": 4,
00:09:04.650  "bdev_retry_count": 3,
00:09:04.650  "transport_ack_timeout": 0,
00:09:04.650  "ctrlr_loss_timeout_sec": 0,
00:09:04.650  "reconnect_delay_sec": 0,
00:09:04.650  "fast_io_fail_timeout_sec": 0,
00:09:04.650  "disable_auto_failback": false,
00:09:04.650  "generate_uuids": false,
00:09:04.650  "transport_tos": 0,
00:09:04.650  "nvme_error_stat": false,
00:09:04.650  "rdma_srq_size": 0,
00:09:04.650  "io_path_stat": false,
00:09:04.650  "allow_accel_sequence": false,
00:09:04.650  "rdma_max_cq_size": 0,
00:09:04.650  "rdma_cm_event_timeout_ms": 0,
00:09:04.650  "dhchap_digests": [
00:09:04.650  "sha256",
00:09:04.650  "sha384",
00:09:04.650  "sha512"
00:09:04.650  ],
00:09:04.650  "dhchap_dhgroups": [
00:09:04.650  "null",
00:09:04.650  "ffdhe2048",
00:09:04.650  "ffdhe3072",
00:09:04.650  "ffdhe4096",
00:09:04.650  "ffdhe6144",
00:09:04.650  "ffdhe8192"
00:09:04.650  ],
00:09:04.650  "rdma_umr_per_io": false
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "bdev_nvme_set_hotplug",
00:09:04.650  "params": {
00:09:04.650  "period_us": 100000,
00:09:04.650  "enable": false
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "bdev_iscsi_set_options",
00:09:04.650  "params": {
00:09:04.650  "timeout_sec": 30
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "bdev_wait_for_examine"
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "nvmf",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "nvmf_set_config",
00:09:04.650  "params": {
00:09:04.650  "discovery_filter": "match_any",
00:09:04.650  "admin_cmd_passthru": {
00:09:04.650  "identify_ctrlr": false
00:09:04.650  },
00:09:04.650  "dhchap_digests": [
00:09:04.650  "sha256",
00:09:04.650  "sha384",
00:09:04.650  "sha512"
00:09:04.650  ],
00:09:04.650  "dhchap_dhgroups": [
00:09:04.650  "null",
00:09:04.650  "ffdhe2048",
00:09:04.650  "ffdhe3072",
00:09:04.650  "ffdhe4096",
00:09:04.650  "ffdhe6144",
00:09:04.650  "ffdhe8192"
00:09:04.650  ]
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "nvmf_set_max_subsystems",
00:09:04.650  "params": {
00:09:04.650  "max_subsystems": 1024
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "nvmf_set_crdt",
00:09:04.650  "params": {
00:09:04.650  "crdt1": 0,
00:09:04.650  "crdt2": 0,
00:09:04.650  "crdt3": 0
00:09:04.650  }
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "method": "nvmf_create_transport",
00:09:04.650  "params": {
00:09:04.650  "trtype": "TCP",
00:09:04.650  "max_queue_depth": 128,
00:09:04.650  "max_io_qpairs_per_ctrlr": 127,
00:09:04.650  "in_capsule_data_size": 4096,
00:09:04.650  "max_io_size": 131072,
00:09:04.650  "io_unit_size": 131072,
00:09:04.650  "max_aq_depth": 128,
00:09:04.650  "num_shared_buffers": 511,
00:09:04.650  "buf_cache_size": 4294967295,
00:09:04.650  "dif_insert_or_strip": false,
00:09:04.650  "zcopy": false,
00:09:04.650  "c2h_success": true,
00:09:04.650  "sock_priority": 0,
00:09:04.650  "abort_timeout_sec": 1,
00:09:04.650  "ack_timeout": 0,
00:09:04.650  "data_wr_pool_size": 0
00:09:04.650  }
00:09:04.650  }
00:09:04.650  ]
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "nbd",
00:09:04.650  "config": []
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "ublk",
00:09:04.650  "config": []
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "vhost_blk",
00:09:04.650  "config": []
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "scsi",
00:09:04.650  "config": null
00:09:04.650  },
00:09:04.650  {
00:09:04.650  "subsystem": "iscsi",
00:09:04.650  "config": [
00:09:04.650  {
00:09:04.650  "method": "iscsi_set_options",
00:09:04.650  "params": {
00:09:04.650  "node_base": "iqn.2016-06.io.spdk",
00:09:04.650  "max_sessions": 128,
00:09:04.650  "max_connections_per_session": 2,
00:09:04.650  "max_queue_depth": 64,
00:09:04.650  "default_time2wait": 2,
00:09:04.650  "default_time2retain": 20,
00:09:04.650  "first_burst_length": 8192,
00:09:04.650  "immediate_data": true,
00:09:04.650  "allow_duplicated_isid": false,
00:09:04.650  "error_recovery_level": 0,
00:09:04.650  "nop_timeout": 60,
00:09:04.650  "nop_in_interval": 30,
00:09:04.650  "disable_chap": false,
00:09:04.650  "require_chap": false,
00:09:04.651  "mutual_chap": false,
00:09:04.651  "chap_group": 0,
00:09:04.651  "max_large_datain_per_connection": 64,
00:09:04.651  "max_r2t_per_connection": 4,
00:09:04.651  "pdu_pool_size": 36864,
00:09:04.651  "immediate_data_pool_size": 16384,
00:09:04.651  "data_out_pool_size": 2048
00:09:04.651  }
00:09:04.651  }
00:09:04.651  ]
00:09:04.651  },
00:09:04.651  {
00:09:04.651  "subsystem": "vhost_scsi",
00:09:04.651  "config": []
00:09:04.651  }
00:09:04.651  ]
00:09:04.651  }
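The dump above is the config.json that save_config produced; the next phase boots a fresh target straight from that file and greps its log for the transport banner. The round trip, in outline:

  ./scripts/rpc.py save_config > test/rpc/config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
  grep -q 'TCP Transport Init' test/rpc/log.txt   # the JSON-configured target re-created the transport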
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 68830
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 68830 ']'
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 68830
00:09:04.651    13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:04.651    13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68830
00:09:04.651  killing process with pid 68830
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68830'
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 68830
00:09:04.651   13:45:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 68830
00:09:07.933   13:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=68891
00:09:07.933   13:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:07.933   13:45:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 68891
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 68891 ']'
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 68891
00:09:13.224    13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:13.224    13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68891
00:09:13.224  killing process with pid 68891
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68891'
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 68891
00:09:13.224   13:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 68891
00:09:15.146   13:45:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:15.146   13:45:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:15.146  ************************************
00:09:15.146  END TEST skip_rpc_with_json
00:09:15.146  ************************************
00:09:15.146  
00:09:15.146  real	0m12.410s
00:09:15.146  user	0m11.653s
00:09:15.146  sys	0m1.294s
00:09:15.146   13:45:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:15.146   13:45:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:15.146   13:45:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:09:15.146   13:45:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:15.146   13:45:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:15.146   13:45:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:15.405  ************************************
00:09:15.405  START TEST skip_rpc_with_delay
00:09:15.405  ************************************
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:15.405    13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:15.405    13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:09:15.405   13:45:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:15.405  [2024-12-11 13:45:58.033214] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:09:15.405   13:45:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:09:15.405   13:45:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:15.405   13:45:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:15.405   13:45:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:15.405  
00:09:15.405  real	0m0.177s
00:09:15.405  user	0m0.091s
00:09:15.405  sys	0m0.087s
00:09:15.405   13:45:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:15.405   13:45:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:09:15.405  ************************************
00:09:15.405  END TEST skip_rpc_with_delay
00:09:15.405  ************************************
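The delay test only needs the target to reject a contradictory flag pair, which is exactly the error captured above (sketch of the failing invocation):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # -> app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.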
00:09:15.405    13:45:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:09:15.405   13:45:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:09:15.405   13:45:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:09:15.405   13:45:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:15.405   13:45:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:15.405   13:45:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:15.405  ************************************
00:09:15.405  START TEST exit_on_failed_rpc_init
00:09:15.405  ************************************
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69025
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69025
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69025 ']'
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:15.405  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:15.405   13:45:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:15.665  [2024-12-11 13:45:58.269758] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:15.665  [2024-12-11 13:45:58.269952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69025 ]
00:09:15.923  [2024-12-11 13:45:58.473173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:15.923  [2024-12-11 13:45:58.683212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:17.303    13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:17.303    13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:09:17.303   13:45:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:17.303  [2024-12-11 13:45:59.841444] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:17.303  [2024-12-11 13:45:59.841623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69049 ]
00:09:17.303  [2024-12-11 13:46:00.046516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:17.561  [2024-12-11 13:46:00.230899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:17.561  [2024-12-11 13:46:00.231007] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:09:17.561  [2024-12-11 13:46:00.231037] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:09:17.561  [2024-12-11 13:46:00.231056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69025
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69025 ']'
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69025
00:09:17.820    13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:17.820    13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69025
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:17.820  killing process with pid 69025
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69025'
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69025
00:09:17.820   13:46:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69025
00:09:20.353  
00:09:20.353  real	0m4.918s
00:09:20.353  user	0m5.312s
00:09:20.353  sys	0m0.702s
00:09:20.353   13:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:20.353   13:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:20.353  ************************************
00:09:20.353  END TEST exit_on_failed_rpc_init
00:09:20.353  ************************************
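The init failure provoked above is a socket collision: a second spdk_tgt on core mask 0x2 asks for the same default /var/tmp/spdk.sock that the pid-69025 target already holds, so spdk_app_start fails and the process exits non-zero, which is what the es checks assert. In outline:

  ./build/bin/spdk_tgt -m 0x1 &   # first target owns /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2     # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."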
00:09:20.611   13:46:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:20.611  
00:09:20.611  real	0m25.840s
00:09:20.611  user	0m24.539s
00:09:20.611  sys	0m2.898s
00:09:20.611   13:46:03 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:20.611   13:46:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:20.611  ************************************
00:09:20.611  END TEST skip_rpc
00:09:20.611  ************************************
00:09:20.611   13:46:03  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:09:20.611   13:46:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:20.611   13:46:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:20.611   13:46:03  -- common/autotest_common.sh@10 -- # set +x
00:09:20.612  ************************************
00:09:20.612  START TEST rpc_client
00:09:20.612  ************************************
00:09:20.612   13:46:03 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:09:20.612  * Looking for test storage...
00:09:20.612  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:09:20.612    13:46:03 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:20.612     13:46:03 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:09:20.612     13:46:03 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:20.871    13:46:03 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@345 -- # : 1
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@353 -- # local d=1
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@355 -- # echo 1
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@353 -- # local d=2
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:20.871     13:46:03 rpc_client -- scripts/common.sh@355 -- # echo 2
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:20.871    13:46:03 rpc_client -- scripts/common.sh@368 -- # return 0
00:09:20.871    13:46:03 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:20.871    13:46:03 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:20.871  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.871  		--rc genhtml_branch_coverage=1
00:09:20.871  		--rc genhtml_function_coverage=1
00:09:20.871  		--rc genhtml_legend=1
00:09:20.871  		--rc geninfo_all_blocks=1
00:09:20.871  		--rc geninfo_unexecuted_blocks=1
00:09:20.871  		
00:09:20.871  		'
00:09:20.871    13:46:03 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:20.871  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.871  		--rc genhtml_branch_coverage=1
00:09:20.871  		--rc genhtml_function_coverage=1
00:09:20.871  		--rc genhtml_legend=1
00:09:20.871  		--rc geninfo_all_blocks=1
00:09:20.871  		--rc geninfo_unexecuted_blocks=1
00:09:20.871  		
00:09:20.871  		'
00:09:20.871    13:46:03 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:20.871  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.871  		--rc genhtml_branch_coverage=1
00:09:20.871  		--rc genhtml_function_coverage=1
00:09:20.871  		--rc genhtml_legend=1
00:09:20.871  		--rc geninfo_all_blocks=1
00:09:20.871  		--rc geninfo_unexecuted_blocks=1
00:09:20.871  		
00:09:20.871  		'
00:09:20.871    13:46:03 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:20.871  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:20.871  		--rc genhtml_branch_coverage=1
00:09:20.871  		--rc genhtml_function_coverage=1
00:09:20.871  		--rc genhtml_legend=1
00:09:20.871  		--rc geninfo_all_blocks=1
00:09:20.871  		--rc geninfo_unexecuted_blocks=1
00:09:20.871  		
00:09:20.871  		'
00:09:20.871   13:46:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:09:20.871  OK
00:09:20.871   13:46:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:09:20.871  
00:09:20.871  real	0m0.271s
00:09:20.871  user	0m0.152s
00:09:20.871  sys	0m0.137s
00:09:20.871   13:46:03 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:20.871   13:46:03 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:09:20.871  ************************************
00:09:20.871  END TEST rpc_client
00:09:20.871  ************************************
00:09:20.871   13:46:03  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:09:20.871   13:46:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:20.871   13:46:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:20.871   13:46:03  -- common/autotest_common.sh@10 -- # set +x
00:09:20.871  ************************************
00:09:20.871  START TEST json_config
00:09:20.871  ************************************
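The banner pairs and the real/user/sys block above come from the harness's run_test wrapper, which times each test command between START/END markers. A simplified sketch of that pattern (the real autotest_common.sh version also validates arguments and manages xtrace, which is omitted here):

    # Simplified sketch of the run_test pattern seen throughout this log
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    run_test json_config ./test/json_config/json_config.sh   # hypothetical invocation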
00:09:20.872   13:46:03 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:09:20.872    13:46:03 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:20.872     13:46:03 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:20.872     13:46:03 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:09:21.132    13:46:03 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:21.132    13:46:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:21.132    13:46:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:21.132    13:46:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:21.132    13:46:03 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:09:21.132    13:46:03 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:09:21.132    13:46:03 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:09:21.132    13:46:03 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:09:21.132    13:46:03 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:09:21.132    13:46:03 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:09:21.132    13:46:03 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:09:21.132    13:46:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:21.132    13:46:03 json_config -- scripts/common.sh@344 -- # case "$op" in
00:09:21.132    13:46:03 json_config -- scripts/common.sh@345 -- # : 1
00:09:21.132    13:46:03 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:21.132    13:46:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:21.132     13:46:03 json_config -- scripts/common.sh@365 -- # decimal 1
00:09:21.132     13:46:03 json_config -- scripts/common.sh@353 -- # local d=1
00:09:21.132     13:46:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:21.132     13:46:03 json_config -- scripts/common.sh@355 -- # echo 1
00:09:21.132    13:46:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:09:21.132     13:46:03 json_config -- scripts/common.sh@366 -- # decimal 2
00:09:21.132     13:46:03 json_config -- scripts/common.sh@353 -- # local d=2
00:09:21.132     13:46:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:21.132     13:46:03 json_config -- scripts/common.sh@355 -- # echo 2
00:09:21.132    13:46:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:09:21.132    13:46:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:21.132    13:46:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:21.132    13:46:03 json_config -- scripts/common.sh@368 -- # return 0
00:09:21.132    13:46:03 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:21.132    13:46:03 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:21.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.132  		--rc genhtml_branch_coverage=1
00:09:21.132  		--rc genhtml_function_coverage=1
00:09:21.132  		--rc genhtml_legend=1
00:09:21.132  		--rc geninfo_all_blocks=1
00:09:21.132  		--rc geninfo_unexecuted_blocks=1
00:09:21.132  		
00:09:21.132  		'
00:09:21.132    13:46:03 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:21.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.132  		--rc genhtml_branch_coverage=1
00:09:21.132  		--rc genhtml_function_coverage=1
00:09:21.132  		--rc genhtml_legend=1
00:09:21.132  		--rc geninfo_all_blocks=1
00:09:21.132  		--rc geninfo_unexecuted_blocks=1
00:09:21.132  		
00:09:21.132  		'
00:09:21.132    13:46:03 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:21.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.132  		--rc genhtml_branch_coverage=1
00:09:21.132  		--rc genhtml_function_coverage=1
00:09:21.132  		--rc genhtml_legend=1
00:09:21.132  		--rc geninfo_all_blocks=1
00:09:21.132  		--rc geninfo_unexecuted_blocks=1
00:09:21.132  		
00:09:21.132  		'
00:09:21.132    13:46:03 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:21.132  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:21.132  		--rc genhtml_branch_coverage=1
00:09:21.132  		--rc genhtml_function_coverage=1
00:09:21.132  		--rc genhtml_legend=1
00:09:21.132  		--rc geninfo_all_blocks=1
00:09:21.132  		--rc geninfo_unexecuted_blocks=1
00:09:21.132  		
00:09:21.132  		'
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:21.132     13:46:03 json_config -- nvmf/common.sh@7 -- # uname -s
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:21.132     13:46:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eab8dd07-70d5-4d8b-b2aa-3ed47cbc8689
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=eab8dd07-70d5-4d8b-b2aa-3ed47cbc8689
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:21.132     13:46:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:09:21.132     13:46:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:21.132     13:46:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:21.132     13:46:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:21.132      13:46:03 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:21.132      13:46:03 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:21.132      13:46:03 json_config -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:21.132      13:46:03 json_config -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:21.132      13:46:03 json_config -- paths/export.sh@6 -- # export PATH
00:09:21.132      13:46:03 json_config -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
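paths/export.sh prepends the toolchain directories each time it is sourced, which is why the PATH echoed above contains the same entries several times. Lookup still works, but an idempotent prepend avoids the growth; a sketch:

    # Sketch: prepend a directory to PATH only if it is not already present
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already there, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH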
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@51 -- # : 0
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:21.132  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:21.132    13:46:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
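The "integer expression expected" message above is a real defect surfaced by the trace: nvmf/common.sh line 33 runs a numeric test while the flag variable is unset, so test(1) receives an empty string where an integer is required ('[' '' -eq 1 ']'). Supplying a default in the expansion makes the comparison well defined; a sketch of the fix (the flag name below is hypothetical):

    # Before (fails when the flag is unset, as in this run):
    #   [ "$SPDK_TEST_SOME_FLAG" -eq 1 ]      -> '[' '' -eq 1 ']'
    # After: default the expansion to 0 so test always sees an integer
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag-specific setup here"
    fi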
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:09:21.132  INFO: JSON configuration test init
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:09:21.132   13:46:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:21.133  Waiting for target to run...
00:09:21.133  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:21.133   13:46:03 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:09:21.133   13:46:03 json_config -- json_config/common.sh@9 -- # local app=target
00:09:21.133   13:46:03 json_config -- json_config/common.sh@10 -- # shift
00:09:21.133   13:46:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:21.133   13:46:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:21.133   13:46:03 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:21.133   13:46:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:21.133   13:46:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:21.133   13:46:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69219
00:09:21.133   13:46:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:21.133   13:46:03 json_config -- json_config/common.sh@25 -- # waitforlisten 69219 /var/tmp/spdk_tgt.sock
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@835 -- # '[' -z 69219 ']'
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:21.133   13:46:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:21.133   13:46:03 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:09:21.133  [2024-12-11 13:46:03.833482] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:21.133  [2024-12-11 13:46:03.833627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69219 ]
00:09:21.701  [2024-12-11 13:46:04.277317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.701  [2024-12-11 13:46:04.454295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
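json_config_test_start_app launches spdk_tgt with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that polling idea (rpc_get_methods is a standard SPDK RPC; the retry budget mirrors max_retries=100 above, and the path to rpc.py assumes the repo root as working directory):

    # Minimal sketch of waiting for an SPDK app's RPC socket to come up
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" || return 1        # app died while starting up
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                      # socket is accepting RPCs
            fi
            sleep 0.5
        done
        return 1
    }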
00:09:22.269  
00:09:22.269   13:46:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:22.269   13:46:04 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:22.269   13:46:04 json_config -- json_config/common.sh@26 -- # echo ''
00:09:22.269   13:46:04 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:09:22.269   13:46:04 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:09:22.269   13:46:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:22.269   13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:22.269   13:46:04 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:09:22.269   13:46:04 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:09:22.269   13:46:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:22.269   13:46:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:22.269   13:46:04 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:09:22.269   13:46:04 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:09:22.269   13:46:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:09:23.648   13:46:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:23.648   13:46:06 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:09:23.648   13:46:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:09:23.648    13:46:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:09:23.648    13:46:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:09:23.648    13:46:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@51 -- # local get_types
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@54 -- # sort
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:09:23.907   13:46:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:23.907   13:46:06 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@62 -- # return 0
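tgt_check_notification_types verifies the locally enabled notification types against what notify_get_types reports by concatenating both lists and keeping only lines that occur once: sort | uniq -u yields the symmetric difference, so an empty type_diff (as above) means the sets match. The same check as a standalone sketch:

    # Sketch: symmetric difference of two word lists via sort | uniq -u
    enabled="bdev_register bdev_unregister fsdev_register fsdev_unregister"
    reported="fsdev_register fsdev_unregister bdev_register bdev_unregister"
    type_diff=$(echo $enabled $reported | tr ' ' '\n' | sort | uniq -u)
    if [[ -n "$type_diff" ]]; then
        echo "ERROR: mismatched notification types: $type_diff" >&2
    fi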
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@285 -- # [[ 1 -eq 1 ]]
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@286 -- # create_bdev_subsystem_config
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@112 -- # timing_enter create_bdev_subsystem_config
00:09:23.907   13:46:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:23.907   13:46:06 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@114 -- # expected_notifications=()
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@114 -- # local expected_notifications
00:09:23.907   13:46:06 json_config -- json_config/json_config.sh@118 -- # expected_notifications+=($(get_notifications))
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@118 -- # get_notifications
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:23.907    13:46:06 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:23.907     13:46:06 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:09:23.907     13:46:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:09:23.907     13:46:06 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:09:24.166    13:46:06 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:09:24.166    13:46:06 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:24.166    13:46:06 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:24.166   13:46:06 json_config -- json_config/json_config.sh@120 -- # [[ 1 -eq 1 ]]
00:09:24.166   13:46:06 json_config -- json_config/json_config.sh@121 -- # local lvol_store_base_bdev=Nvme0n1
00:09:24.166   13:46:06 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:09:24.166   13:46:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:09:24.166  Nvme0n1p0 Nvme0n1p1
00:09:24.425   13:46:06 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_split_create Malloc0 3
00:09:24.425   13:46:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:09:24.425  [2024-12-11 13:46:07.164531] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:24.425  [2024-12-11 13:46:07.164614] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:24.425  
00:09:24.425   13:46:07 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:09:24.425   13:46:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:09:24.684  Malloc3
00:09:24.684   13:46:07 json_config -- json_config/json_config.sh@126 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:09:24.684   13:46:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:09:24.942  [2024-12-11 13:46:07.628144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:24.942  [2024-12-11 13:46:07.628236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.942  [2024-12-11 13:46:07.628263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80
00:09:24.942  [2024-12-11 13:46:07.628280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.942  [2024-12-11 13:46:07.631247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.942  [2024-12-11 13:46:07.631301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:09:24.942  PTBdevFromMalloc3
00:09:24.942   13:46:07 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_null_create Null0 32 512
00:09:24.942   13:46:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:09:25.201  Null0
00:09:25.201   13:46:07 json_config -- json_config/json_config.sh@130 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:09:25.201   13:46:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:09:25.459  Malloc0
00:09:25.459   13:46:08 json_config -- json_config/json_config.sh@131 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:09:25.459   13:46:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:09:25.718  Malloc1
00:09:25.719   13:46:08 json_config -- json_config/json_config.sh@144 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:09:25.719   13:46:08 json_config -- json_config/json_config.sh@147 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:09:25.977  102400+0 records in
00:09:25.977  102400+0 records out
00:09:25.977  104857600 bytes (105 MB, 100 MiB) copied, 0.302062 s, 347 MB/s
00:09:25.977   13:46:08 json_config -- json_config/json_config.sh@148 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:09:25.978   13:46:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:09:26.236  aio_disk
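The AIO step above is two commands: carve out a 100 MiB backing file with dd, then register it as a bdev with a 1024-byte block size. Reproduced as a standalone sketch, with the same paths and sizes as this run:

    # Sketch: back an SPDK AIO bdev with a plain file, as done above
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400        # 100 MiB file
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_aio_create /sample_aio aio_disk 1024              # name=aio_disk, 1 KiB blocks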
00:09:26.236   13:46:08 json_config -- json_config/json_config.sh@149 -- # expected_notifications+=(bdev_register:aio_disk)
00:09:26.236   13:46:08 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:09:26.236   13:46:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:09:26.495  9a25ff96-8f51-4b04-b154-45b403e70058
00:09:26.495   13:46:09 json_config -- json_config/json_config.sh@161 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:09:26.495    13:46:09 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:09:26.495    13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:09:26.754    13:46:09 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:09:26.754    13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:09:27.012    13:46:09 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:09:27.012    13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:09:27.270    13:46:09 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:09:27.270    13:46:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
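The RPCs traced above build the logical-volume fixture in order: an lvstore on Nvme0n1p0, a thick and a thin lvol, a snapshot of the first lvol, and a clone of that snapshot. Collected into one sketch, with the exact arguments from this run:

    # Sketch: the lvol fixture built above, as plain RPC calls
    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test   # 1 MiB clusters
    $rpc bdev_lvol_create -l lvs_test lvol0 32                    # thick lvol, 32 MiB
    $rpc bdev_lvol_create -l lvs_test -t lvol1 32                 # -t: thin provisioned
    $rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    $rpc bdev_lvol_clone lvs_test/snapshot0 clone0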
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@164 -- # [[ 0 -eq 1 ]]
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@179 -- # [[ 0 -eq 1 ]]
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@185 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:98f6d440-9a1c-4818-b8a4-a4088df1b477 bdev_register:6f495b15-4e42-4421-93b6-4472c28dea84 bdev_register:e38f0278-e080-44ed-b7c4-2c28c7491cfe bdev_register:9411482a-81ca-4a53-99f4-80ab77cfd1a0
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@74 -- # local events_to_check
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@75 -- # local recorded_events
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@78 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@78 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:98f6d440-9a1c-4818-b8a4-a4088df1b477 bdev_register:6f495b15-4e42-4421-93b6-4472c28dea84 bdev_register:e38f0278-e080-44ed-b7c4-2c28c7491cfe bdev_register:9411482a-81ca-4a53-99f4-80ab77cfd1a0
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@78 -- # sort
00:09:27.529   13:46:10 json_config -- json_config/json_config.sh@79 -- # recorded_events=($(get_notifications | sort))
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@79 -- # get_notifications
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.529    13:46:10 json_config -- json_config/json_config.sh@79 -- # sort
00:09:27.529     13:46:10 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:09:27.529     13:46:10 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:09:27.529     13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p1
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p0
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc3
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:PTBdevFromMalloc3
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Null0
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p2
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p1
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p0
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc1
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:aio_disk
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:98f6d440-9a1c-4818-b8a4-a4088df1b477
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:6f495b15-4e42-4421-93b6-4472c28dea84
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:e38f0278-e080-44ed-b7c4-2c28c7491cfe
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:9411482a-81ca-4a53-99f4-80ab77cfd1a0
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:27.788   13:46:10 json_config -- json_config/json_config.sh@81 -- # [[ bdev_register:6f495b15-4e42-4421-93b6-4472c28dea84 bdev_register:9411482a-81ca-4a53-99f4-80ab77cfd1a0 bdev_register:98f6d440-9a1c-4818-b8a4-a4088df1b477 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e38f0278-e080-44ed-b7c4-2c28c7491cfe != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\f\4\9\5\b\1\5\-\4\e\4\2\-\4\4\2\1\-\9\3\b\6\-\4\4\7\2\c\2\8\d\e\a\8\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\4\1\1\4\8\2\a\-\8\1\c\a\-\4\a\5\3\-\9\9\f\4\-\8\0\a\b\7\7\c\f\d\1\a\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\8\f\6\d\4\4\0\-\9\a\1\c\-\4\8\1\8\-\b\8\a\4\-\a\4\0\8\8\d\f\1\b\4\7\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\3\8\f\0\2\7\8\-\e\0\8\0\-\4\4\e\d\-\b\7\c\4\-\2\c\2\8\c\7\4\9\1\c\f\e ]]
00:09:27.788   13:46:10 json_config -- json_config/json_config.sh@93 -- # cat
00:09:27.788    13:46:10 json_config -- json_config/json_config.sh@93 -- # printf ' %s\n' bdev_register:6f495b15-4e42-4421-93b6-4472c28dea84 bdev_register:9411482a-81ca-4a53-99f4-80ab77cfd1a0 bdev_register:98f6d440-9a1c-4818-b8a4-a4088df1b477 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e38f0278-e080-44ed-b7c4-2c28c7491cfe
00:09:27.788  Expected events matched:
00:09:27.788   bdev_register:6f495b15-4e42-4421-93b6-4472c28dea84
00:09:27.788   bdev_register:9411482a-81ca-4a53-99f4-80ab77cfd1a0
00:09:27.788   bdev_register:98f6d440-9a1c-4818-b8a4-a4088df1b477
00:09:27.788   bdev_register:Malloc0
00:09:27.788   bdev_register:Malloc0p0
00:09:27.788   bdev_register:Malloc0p1
00:09:27.788   bdev_register:Malloc0p2
00:09:27.788   bdev_register:Malloc1
00:09:27.788   bdev_register:Malloc3
00:09:27.788   bdev_register:Null0
00:09:27.788   bdev_register:Nvme0n1
00:09:27.788   bdev_register:Nvme0n1p0
00:09:27.788   bdev_register:Nvme0n1p1
00:09:27.788   bdev_register:PTBdevFromMalloc3
00:09:27.788   bdev_register:aio_disk
00:09:27.788   bdev_register:e38f0278-e080-44ed-b7c4-2c28c7491cfe
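tgt_check_notifications compares the sorted expected list against the sorted stream from notify_get_notifications; the long glob match above is just an array equality test spelled as a pattern. A compact sketch of that comparison (get_notifications is the helper traced earlier; the expected array is assumed to hold the bdev_register:... strings):

    # Sketch: compare expected vs. recorded notifications, order-insensitively
    mapfile -t events_to_check < <(printf '%s\n' "${expected[@]}" | sort)
    mapfile -t recorded_events < <(get_notifications | sort)
    if [[ "${events_to_check[*]}" != "${recorded_events[*]}" ]]; then
        echo "ERROR: expected and recorded events differ" >&2
        return 1
    fi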
00:09:27.788   13:46:10 json_config -- json_config/json_config.sh@187 -- # timing_exit create_bdev_subsystem_config
00:09:27.788   13:46:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:27.788   13:46:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:27.788   13:46:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:09:27.788   13:46:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:09:27.788   13:46:10 json_config -- json_config/json_config.sh@297 -- # [[ 0 -eq 1 ]]
00:09:27.789   13:46:10 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:09:27.789   13:46:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:27.789   13:46:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:27.789   13:46:10 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:09:27.789   13:46:10 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:27.789   13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:28.047  MallocBdevForConfigChangeCheck
00:09:28.047   13:46:10 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:09:28.047   13:46:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:28.047   13:46:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:28.306   13:46:10 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:09:28.306   13:46:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:28.565  INFO: shutting down applications...
00:09:28.565   13:46:11 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:09:28.565   13:46:11 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:09:28.565   13:46:11 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:09:28.565   13:46:11 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:09:28.565   13:46:11 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:28.918  [2024-12-11 13:46:11.454136] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:09:28.918  Calling clear_vhost_scsi_subsystem
00:09:28.918  Calling clear_iscsi_subsystem
00:09:28.918  Calling clear_vhost_blk_subsystem
00:09:28.918  Calling clear_ublk_subsystem
00:09:28.918  Calling clear_nbd_subsystem
00:09:28.918  Calling clear_nvmf_subsystem
00:09:28.918  Calling clear_bdev_subsystem
00:09:28.918   13:46:11 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:09:28.918   13:46:11 json_config -- json_config/json_config.sh@350 -- # count=100
00:09:28.918   13:46:11 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:09:28.918   13:46:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:28.918   13:46:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:09:28.918   13:46:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:09:29.504   13:46:12 json_config -- json_config/json_config.sh@352 -- # break
00:09:29.504   13:46:12 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
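json_config_clear drives clear_config.py and then loops (up to 100 tries here) re-reading save_config, stripping global parameters, and asking check_empty whether any subsystem still carries config; the break above is the first success. A sketch of the loop, assuming check_empty exits nonzero while subsystems remain configured (the retry interval is an assumption):

    # Sketch: poll until the target's JSON config is empty after clearing
    count=100
    while (( count > 0 )); do
        if scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
             | test/json_config/config_filter.py -method delete_global_parameters \
             | test/json_config/config_filter.py -method check_empty; then
            break                      # nothing left but global parameters
        fi
        (( count-- )); sleep 0.1
    done
    (( count > 0 )) || echo "ERROR: config not cleared" >&2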
00:09:29.504   13:46:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:09:29.504   13:46:12 json_config -- json_config/common.sh@31 -- # local app=target
00:09:29.504   13:46:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:29.504   13:46:12 json_config -- json_config/common.sh@35 -- # [[ -n 69219 ]]
00:09:29.504   13:46:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69219
00:09:29.504   13:46:12 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:29.504   13:46:12 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:29.504   13:46:12 json_config -- json_config/common.sh@41 -- # kill -0 69219
00:09:29.504   13:46:12 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:30.070   13:46:12 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:30.070   13:46:12 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:30.070   13:46:12 json_config -- json_config/common.sh@41 -- # kill -0 69219
00:09:30.070   13:46:12 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:30.637   13:46:13 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:30.637   13:46:13 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:30.637   13:46:13 json_config -- json_config/common.sh@41 -- # kill -0 69219
00:09:30.637   13:46:13 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:30.897   13:46:13 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:30.897   13:46:13 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:30.897   13:46:13 json_config -- json_config/common.sh@41 -- # kill -0 69219
00:09:30.897   13:46:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:30.897   13:46:13 json_config -- json_config/common.sh@43 -- # break
00:09:30.897   13:46:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:30.897  SPDK target shutdown done
00:09:30.897  INFO: relaunching applications...
00:09:30.897   13:46:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:30.897   13:46:13 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
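The shutdown that just completed follows json_config_test_shutdown_app's pattern: send SIGINT once, then poll kill -0 every half second (30 tries here) until the process is gone, which is the 0.5 s cadence visible above. As a sketch (the kill -9 fallback is an assumption, not shown in this run):

    # Sketch: SIGINT once, then poll for exit with kill -0
    shutdown_app_sketch() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 0   # process is gone
            sleep 0.5
        done
        kill -9 "$pid" 2> /dev/null                   # last resort
        return 1
    }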
00:09:30.897   13:46:13 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:30.897   13:46:13 json_config -- json_config/common.sh@9 -- # local app=target
00:09:30.897   13:46:13 json_config -- json_config/common.sh@10 -- # shift
00:09:30.897   13:46:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:30.897   13:46:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:30.897   13:46:13 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:30.897   13:46:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:30.897   13:46:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:30.897  Waiting for target to run...
00:09:30.897   13:46:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69481
00:09:30.897   13:46:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:30.897   13:46:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:30.897   13:46:13 json_config -- json_config/common.sh@25 -- # waitforlisten 69481 /var/tmp/spdk_tgt.sock
00:09:30.897   13:46:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 69481 ']'
00:09:30.897   13:46:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:30.897   13:46:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:30.897  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:30.897   13:46:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:30.897   13:46:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:30.897   13:46:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:31.155  [2024-12-11 13:46:13.744942] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:31.155  [2024-12-11 13:46:13.745138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69481 ]
00:09:31.412  [2024-12-11 13:46:14.192243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.670  [2024-12-11 13:46:14.333313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:33.044  [2024-12-11 13:46:15.389713] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:09:33.044  [2024-12-11 13:46:15.389829] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:09:33.044  [2024-12-11 13:46:15.397649] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:33.044  [2024-12-11 13:46:15.397711] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:33.044  [2024-12-11 13:46:15.405670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:33.044  [2024-12-11 13:46:15.405723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:33.044  [2024-12-11 13:46:15.405746] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:09:33.044  [2024-12-11 13:46:15.512326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:33.044  [2024-12-11 13:46:15.512423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:33.044  [2024-12-11 13:46:15.512466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680
00:09:33.044  [2024-12-11 13:46:15.512485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:33.044  [2024-12-11 13:46:15.513473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:33.044  [2024-12-11 13:46:15.513699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:09:33.044   13:46:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:33.044   13:46:15 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:33.044   13:46:15 json_config -- json_config/common.sh@26 -- # echo ''
00:09:33.044  
00:09:33.044   13:46:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:33.044   13:46:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:33.044  INFO: Checking if target configuration is the same...
00:09:33.044   13:46:15 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:33.044    13:46:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:33.044    13:46:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:33.044  + '[' 2 -ne 2 ']'
00:09:33.044  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:33.044  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:33.044  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:33.044  +++ basename /dev/fd/62
00:09:33.044  ++ mktemp /tmp/62.XXX
00:09:33.044  + tmp_file_1=/tmp/62.VYv
00:09:33.044  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:33.044  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:33.044  + tmp_file_2=/tmp/spdk_tgt_config.json.8Br
00:09:33.044  + ret=0
00:09:33.044  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:33.611  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:33.611  + diff -u /tmp/62.VYv /tmp/spdk_tgt_config.json.8Br
00:09:33.611  INFO: JSON config files are the same
00:09:33.611  + echo 'INFO: JSON config files are the same'
00:09:33.611  + rm /tmp/62.VYv /tmp/spdk_tgt_config.json.8Br
00:09:33.611  + exit 0
00:09:33.611  INFO: changing configuration and checking if this can be detected...
00:09:33.611   13:46:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:33.611   13:46:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:33.611   13:46:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:33.611   13:46:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:33.869    13:46:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:33.869   13:46:16 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:33.869    13:46:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:33.869  + '[' 2 -ne 2 ']'
00:09:33.869  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:33.869  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:33.869  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:33.869  +++ basename /dev/fd/62
00:09:33.869  ++ mktemp /tmp/62.XXX
00:09:33.869  + tmp_file_1=/tmp/62.1TU
00:09:33.869  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:33.869  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:33.869  + tmp_file_2=/tmp/spdk_tgt_config.json.iwO
00:09:33.869  + ret=0
00:09:33.869  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:34.436  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:34.436  + diff -u /tmp/62.1TU /tmp/spdk_tgt_config.json.iwO
00:09:34.436  + ret=1
00:09:34.436  + echo '=== Start of file: /tmp/62.1TU ==='
00:09:34.436  + cat /tmp/62.1TU
00:09:34.436  + echo '=== End of file: /tmp/62.1TU ==='
00:09:34.436  + echo ''
00:09:34.436  + echo '=== Start of file: /tmp/spdk_tgt_config.json.iwO ==='
00:09:34.436  + cat /tmp/spdk_tgt_config.json.iwO
00:09:34.436  + echo '=== End of file: /tmp/spdk_tgt_config.json.iwO ==='
00:09:34.436  + echo ''
00:09:34.436  + rm /tmp/62.1TU /tmp/spdk_tgt_config.json.iwO
00:09:34.436  + exit 1
00:09:34.436  INFO: configuration change detected.
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:34.436   13:46:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:34.436   13:46:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 69481 ]]
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:34.436   13:46:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:34.436   13:46:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@193 -- # [[ 1 -eq 1 ]]
00:09:34.436   13:46:17 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:09:34.436   13:46:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:09:34.695   13:46:17 json_config -- json_config/json_config.sh@195 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:09:34.695   13:46:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:09:34.954   13:46:17 json_config -- json_config/json_config.sh@196 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:09:34.954   13:46:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:09:35.212   13:46:17 json_config -- json_config/json_config.sh@197 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:09:35.212   13:46:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
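cleanup_bdev_subsystem_config tears the logical-volume stack down in reverse dependency order: the clone first, then the lvol, then the snapshot, and only then the lvstore itself, each step issued through rpc.py against the target's UNIX socket. A condensed sketch of the same ordering, with the socket path and names taken from the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_lvol_delete lvs_test/clone0       # clone depends on the snapshot
    $RPC bdev_lvol_delete lvs_test/lvol0        # lvol the snapshot was taken from
    $RPC bdev_lvol_delete lvs_test/snapshot0    # snapshot can go once clones are gone
    $RPC bdev_lvol_delete_lvstore -l lvs_test   # finally remove the store itself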
00:09:35.471    13:46:18 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:35.471   13:46:18 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:35.471   13:46:18 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:35.471   13:46:18 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:35.471   13:46:18 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:35.471   13:46:18 json_config -- json_config/json_config.sh@330 -- # killprocess 69481
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@954 -- # '[' -z 69481 ']'
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@958 -- # kill -0 69481
00:09:35.471    13:46:18 json_config -- common/autotest_common.sh@959 -- # uname
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:35.471    13:46:18 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69481
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:35.471  killing process with pid 69481
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69481'
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@973 -- # kill 69481
00:09:35.471   13:46:18 json_config -- common/autotest_common.sh@978 -- # wait 69481
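killprocess guards the kill with two sanity checks visible in the trace: kill -0 confirms the PID still exists, and ps --no-headers -o comm= resolves the command name (reactor_0 here) so the helper never signals a sudo wrapper by mistake; it then kills and waits so the exit is fully reaped before the next stage. A reduced sketch, assuming the target is a child of the calling shell (otherwise wait cannot reap it):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 1        # nothing to do if already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                     # wait requires $pid to be our child
    }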
00:09:36.408   13:46:19 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:36.408   13:46:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:36.408   13:46:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:36.408   13:46:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:36.667  INFO: Success
00:09:36.667   13:46:19 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:36.667   13:46:19 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:36.667  ************************************
00:09:36.667  END TEST json_config
00:09:36.667  ************************************
00:09:36.667  
00:09:36.667  real	0m15.666s
00:09:36.667  user	0m21.343s
00:09:36.667  sys	0m2.997s
00:09:36.667   13:46:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:36.667   13:46:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:36.667   13:46:19  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:36.667   13:46:19  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:36.667   13:46:19  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:36.667   13:46:19  -- common/autotest_common.sh@10 -- # set +x
00:09:36.667  ************************************
00:09:36.667  START TEST json_config_extra_key
00:09:36.667  ************************************
00:09:36.667   13:46:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:36.667    13:46:19 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:36.667     13:46:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:09:36.667     13:46:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:36.667    13:46:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:36.667     13:46:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:36.667    13:46:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:36.667    13:46:19 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:36.667    13:46:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:36.667  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:36.668  		--rc genhtml_branch_coverage=1
00:09:36.668  		--rc genhtml_function_coverage=1
00:09:36.668  		--rc genhtml_legend=1
00:09:36.668  		--rc geninfo_all_blocks=1
00:09:36.668  		--rc geninfo_unexecuted_blocks=1
00:09:36.668  		
00:09:36.668  		'
00:09:36.668    13:46:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:36.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:36.668  		--rc genhtml_branch_coverage=1
00:09:36.668  		--rc genhtml_function_coverage=1
00:09:36.668  		--rc genhtml_legend=1
00:09:36.668  		--rc geninfo_all_blocks=1
00:09:36.668  		--rc geninfo_unexecuted_blocks=1
00:09:36.668  		
00:09:36.668  		'
00:09:36.668    13:46:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:36.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:36.668  		--rc genhtml_branch_coverage=1
00:09:36.668  		--rc genhtml_function_coverage=1
00:09:36.668  		--rc genhtml_legend=1
00:09:36.668  		--rc geninfo_all_blocks=1
00:09:36.668  		--rc geninfo_unexecuted_blocks=1
00:09:36.668  		
00:09:36.668  		'
00:09:36.668    13:46:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:36.668  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:36.668  		--rc genhtml_branch_coverage=1
00:09:36.668  		--rc genhtml_function_coverage=1
00:09:36.668  		--rc genhtml_legend=1
00:09:36.668  		--rc geninfo_all_blocks=1
00:09:36.668  		--rc geninfo_unexecuted_blocks=1
00:09:36.668  		
00:09:36.668  		'
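The lt 1.15 2 check above is cmp_versions from scripts/common.sh: both version strings are split into arrays on '.', '-', and ':' (the IFS=.-: lines), then compared component by component up to the longer array's length, and only when the installed lcov is older than 2 does the script export the --rc lcov_branch_coverage/lcov_function_coverage flags into LCOV_OPTS, presumably because lcov 2.x renamed those rc options. A reduced sketch of the comparison, assuming numeric components and defaulting missing components to 0 (a simplification; the real script applies its own default, visible as the ': 1' line in the trace):

    lt_version() {
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        local v x y
        for (( v = 0; v < n; v++ )); do
            x=${a[v]:-0} y=${b[v]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0      # strictly less: success
        done
        return 1                         # equal versions are not 'less than'
    }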
00:09:36.668   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:36.668     13:46:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:36.668    13:46:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:36.668     13:46:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eab8dd07-70d5-4d8b-b2aa-3ed47cbc8689
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=eab8dd07-70d5-4d8b-b2aa-3ed47cbc8689
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:36.928     13:46:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:36.928     13:46:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:36.928     13:46:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:36.928     13:46:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:36.928      13:46:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:36.928      13:46:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:36.928      13:46:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:36.928      13:46:19 json_config_extra_key -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:36.928      13:46:19 json_config_extra_key -- paths/export.sh@6 -- # export PATH
00:09:36.928      13:46:19 json_config_extra_key -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:36.928  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:36.928    13:46:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
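The "[: : integer expression expected" message above is a real, benign scripting error captured by the trace: build_nvmf_app_args evaluates '[' '' -eq 1 ']', and -eq requires both operands to be integers, so an empty variable makes the test builtin print the error and return a failure status, which the script simply treats as "condition false" and moves on. A hedged illustration of the failure mode and the usual fix, defaulting the variable before the arithmetic test (the variable name is illustrative):

    flag=""                   # unset/empty in this environment
    [ "$flag" -eq 1 ]         # -> '[: : integer expression expected', exit status 2
    [ "${flag:-0}" -eq 1 ]    # defaulting to 0 keeps the test well-formed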
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
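json_config/common.sh drives all of its helpers off a small set of bash associative arrays keyed by app role; only 'target' is populated here, carrying the RPC socket, the spdk_tgt parameters, and the config path, while the ERR trap makes any failing command report its function and line. A minimal sketch of the lookup pattern (start_app and the bare spdk_tgt command are illustrative stand-ins for the real helpers):

    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A app_pid=([target]='')

    start_app() {
        local app=$1
        # params are intentionally unquoted so they word-split into flags
        spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &
        app_pid[$app]=$!
    }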
00:09:36.928  INFO: launching applications...
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:36.928   13:46:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69668
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:36.928  Waiting for target to run...
00:09:36.928   13:46:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69668 /var/tmp/spdk_tgt.sock
00:09:36.928   13:46:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69668 ']'
00:09:36.928   13:46:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:36.928   13:46:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:36.928  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:36.928   13:46:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:36.928   13:46:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:36.928   13:46:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:36.928  [2024-12-11 13:46:19.568224] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:36.928  [2024-12-11 13:46:19.568398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69668 ]
00:09:37.495  [2024-12-11 13:46:20.000518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:37.495  [2024-12-11 13:46:20.127411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.445   13:46:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:38.445   13:46:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
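waitforlisten is what turns "the process was forked" into "the target is serving RPCs": with max_retries=100 it repeatedly confirms the PID is alive and that the UNIX socket is reachable, sleeping between attempts; here it succeeds on the first check, hence the (( i == 0 )) and return 0 above. A reduced sketch under the assumption that probing for the socket file is sufficient (the real helper goes further and issues an actual RPC):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
        for (( i = 0; i < 100; i++ )); do             # max_retries=100, as in the trace
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            [ -S "$sock" ] && return 0                # socket present: assume it serves
            sleep 0.5
        done
        return 1
    }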
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:38.445  
00:09:38.445  INFO: shutting down applications...
00:09:38.445   13:46:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:38.445   13:46:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69668 ]]
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69668
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:38.445   13:46:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:39.012   13:46:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:39.012   13:46:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:39.012   13:46:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:39.012   13:46:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:39.579   13:46:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:39.579   13:46:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:39.579   13:46:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:39.579   13:46:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:39.837   13:46:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:39.837   13:46:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:39.837   13:46:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:39.837   13:46:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:40.403   13:46:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:40.403   13:46:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:40.403   13:46:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:40.403   13:46:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:40.968   13:46:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:40.968   13:46:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:40.968   13:46:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:40.968   13:46:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69668
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:41.535  SPDK target shutdown done
00:09:41.535   13:46:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
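json_config_test_shutdown_app asks for a clean exit rather than forcing one: it sends SIGINT and then polls with kill -0 every half second, allowing up to 30 iterations (about 15 s) before giving up; in the trace above the target needed roughly six polls. A sketch of the same pattern:

    kill -SIGINT "$pid"                       # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do          # up to ~15 s of patience
        kill -0 "$pid" 2> /dev/null || break  # kill -0 only probes for existence
        sleep 0.5
    done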
00:09:41.535  Success
00:09:41.535   13:46:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:41.535  ************************************
00:09:41.535  END TEST json_config_extra_key
00:09:41.535  ************************************
00:09:41.535  
00:09:41.535  real	0m4.831s
00:09:41.535  user	0m4.361s
00:09:41.535  sys	0m0.720s
00:09:41.535   13:46:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.535   13:46:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:41.535   13:46:24  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:41.535   13:46:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:41.535   13:46:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.535   13:46:24  -- common/autotest_common.sh@10 -- # set +x
00:09:41.535  ************************************
00:09:41.535  START TEST alias_rpc
00:09:41.535  ************************************
00:09:41.535   13:46:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:41.535  * Looking for test storage...
00:09:41.535  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:09:41.535    13:46:24 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:41.535     13:46:24 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:41.535     13:46:24 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:41.794    13:46:24 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:41.794     13:46:24 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:41.794    13:46:24 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:41.794    13:46:24 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:41.794    13:46:24 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:41.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.794  		--rc genhtml_branch_coverage=1
00:09:41.794  		--rc genhtml_function_coverage=1
00:09:41.794  		--rc genhtml_legend=1
00:09:41.794  		--rc geninfo_all_blocks=1
00:09:41.794  		--rc geninfo_unexecuted_blocks=1
00:09:41.794  		
00:09:41.794  		'
00:09:41.794    13:46:24 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:41.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.794  		--rc genhtml_branch_coverage=1
00:09:41.794  		--rc genhtml_function_coverage=1
00:09:41.794  		--rc genhtml_legend=1
00:09:41.794  		--rc geninfo_all_blocks=1
00:09:41.794  		--rc geninfo_unexecuted_blocks=1
00:09:41.794  		
00:09:41.794  		'
00:09:41.794    13:46:24 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:41.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.794  		--rc genhtml_branch_coverage=1
00:09:41.794  		--rc genhtml_function_coverage=1
00:09:41.794  		--rc genhtml_legend=1
00:09:41.794  		--rc geninfo_all_blocks=1
00:09:41.794  		--rc geninfo_unexecuted_blocks=1
00:09:41.794  		
00:09:41.794  		'
00:09:41.794    13:46:24 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:41.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.794  		--rc genhtml_branch_coverage=1
00:09:41.794  		--rc genhtml_function_coverage=1
00:09:41.794  		--rc genhtml_legend=1
00:09:41.794  		--rc geninfo_all_blocks=1
00:09:41.794  		--rc geninfo_unexecuted_blocks=1
00:09:41.794  		
00:09:41.794  		'
00:09:41.794   13:46:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:41.794   13:46:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69785
00:09:41.794   13:46:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69785
00:09:41.794   13:46:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69785 ']'
00:09:41.794   13:46:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.794  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.794   13:46:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:41.794   13:46:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.794   13:46:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:41.794   13:46:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:41.794   13:46:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:41.794  [2024-12-11 13:46:24.450568] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:41.794  [2024-12-11 13:46:24.451283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69785 ]
00:09:42.053  [2024-12-11 13:46:24.650621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:42.311  [2024-12-11 13:46:24.845435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:43.697   13:46:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:09:43.697   13:46:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69785
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69785 ']'
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69785
00:09:43.697    13:46:26 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:43.697    13:46:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69785
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69785'
00:09:43.697  killing process with pid 69785
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 69785
00:09:43.697   13:46:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 69785
00:09:46.986  
00:09:46.986  real	0m5.182s
00:09:46.986  user	0m5.041s
00:09:46.986  sys	0m0.861s
00:09:46.986   13:46:29 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:46.986   13:46:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:46.986  ************************************
00:09:46.986  END TEST alias_rpc
00:09:46.986  ************************************
00:09:46.986   13:46:29  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:09:46.986   13:46:29  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:46.986   13:46:29  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:46.986   13:46:29  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:46.986   13:46:29  -- common/autotest_common.sh@10 -- # set +x
00:09:46.986  ************************************
00:09:46.986  START TEST spdkcli_tcp
00:09:46.986  ************************************
00:09:46.986   13:46:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:46.986  * Looking for test storage...
00:09:46.986  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:09:46.986    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:46.986     13:46:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:46.986     13:46:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:46.986    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:46.986     13:46:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:46.986    13:46:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:09:46.986    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:46.986    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:46.986  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.986  		--rc genhtml_branch_coverage=1
00:09:46.986  		--rc genhtml_function_coverage=1
00:09:46.986  		--rc genhtml_legend=1
00:09:46.986  		--rc geninfo_all_blocks=1
00:09:46.986  		--rc geninfo_unexecuted_blocks=1
00:09:46.986  		
00:09:46.986  		'
00:09:46.986    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:46.986  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.986  		--rc genhtml_branch_coverage=1
00:09:46.986  		--rc genhtml_function_coverage=1
00:09:46.986  		--rc genhtml_legend=1
00:09:46.987  		--rc geninfo_all_blocks=1
00:09:46.987  		--rc geninfo_unexecuted_blocks=1
00:09:46.987  		
00:09:46.987  		'
00:09:46.987    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:46.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.987  		--rc genhtml_branch_coverage=1
00:09:46.987  		--rc genhtml_function_coverage=1
00:09:46.987  		--rc genhtml_legend=1
00:09:46.987  		--rc geninfo_all_blocks=1
00:09:46.987  		--rc geninfo_unexecuted_blocks=1
00:09:46.987  		
00:09:46.987  		'
00:09:46.987    13:46:29 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:46.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.987  		--rc genhtml_branch_coverage=1
00:09:46.987  		--rc genhtml_function_coverage=1
00:09:46.987  		--rc genhtml_legend=1
00:09:46.987  		--rc geninfo_all_blocks=1
00:09:46.987  		--rc geninfo_unexecuted_blocks=1
00:09:46.987  		
00:09:46.987  		'
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:09:46.987    13:46:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:09:46.987    13:46:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69903
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69903
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69903 ']'
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:46.987  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:46.987   13:46:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:46.987   13:46:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:46.987  [2024-12-11 13:46:29.697731] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:46.987  [2024-12-11 13:46:29.697910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69903 ]
00:09:47.246  [2024-12-11 13:46:29.894317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:47.504  [2024-12-11 13:46:30.066742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.504  [2024-12-11 13:46:30.066783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
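Here spdk_tgt was started with -m 0x3 -p 0: the cpumask 0x3 selects cores 0 and 1, which is exactly why two reactors start in the notices above, and -p 0 makes core 0 the main core. Decoding such a mask is plain bit arithmetic; a small illustration:

    mask=0x3    # cpumask passed via -m
    for (( core = 0; core < 8; core++ )); do   # first 8 cores, enough for illustration
        if (( (mask >> core) & 1 )); then
            echo "expect a reactor on core $core"
        fi
    done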
00:09:48.879   13:46:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:48.879   13:46:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:09:48.879   13:46:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69926
00:09:48.879   13:46:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:48.879   13:46:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:48.879  [
00:09:48.879    "spdk_get_version",
00:09:48.879    "rpc_get_methods",
00:09:48.879    "notify_get_notifications",
00:09:48.879    "notify_get_types",
00:09:48.879    "trace_get_info",
00:09:48.879    "trace_get_tpoint_group_mask",
00:09:48.879    "trace_disable_tpoint_group",
00:09:48.879    "trace_enable_tpoint_group",
00:09:48.879    "trace_clear_tpoint_mask",
00:09:48.879    "trace_set_tpoint_mask",
00:09:48.879    "fsdev_set_opts",
00:09:48.879    "fsdev_get_opts",
00:09:48.879    "framework_get_pci_devices",
00:09:48.879    "framework_get_config",
00:09:48.879    "framework_get_subsystems",
00:09:48.879    "keyring_get_keys",
00:09:48.879    "iobuf_get_stats",
00:09:48.879    "iobuf_set_options",
00:09:48.879    "sock_get_default_impl",
00:09:48.879    "sock_set_default_impl",
00:09:48.879    "sock_impl_set_options",
00:09:48.879    "sock_impl_get_options",
00:09:48.879    "vmd_rescan",
00:09:48.879    "vmd_remove_device",
00:09:48.879    "vmd_enable",
00:09:48.879    "accel_get_stats",
00:09:48.879    "accel_set_options",
00:09:48.879    "accel_set_driver",
00:09:48.879    "accel_crypto_key_destroy",
00:09:48.879    "accel_crypto_keys_get",
00:09:48.879    "accel_crypto_key_create",
00:09:48.879    "accel_assign_opc",
00:09:48.879    "accel_get_module_info",
00:09:48.879    "accel_get_opc_assignments",
00:09:48.879    "bdev_get_histogram",
00:09:48.879    "bdev_enable_histogram",
00:09:48.879    "bdev_set_qos_limit",
00:09:48.879    "bdev_set_qd_sampling_period",
00:09:48.879    "bdev_get_bdevs",
00:09:48.879    "bdev_reset_iostat",
00:09:48.879    "bdev_get_iostat",
00:09:48.879    "bdev_examine",
00:09:48.879    "bdev_wait_for_examine",
00:09:48.879    "bdev_set_options",
00:09:48.879    "scsi_get_devices",
00:09:48.879    "thread_set_cpumask",
00:09:48.879    "scheduler_set_options",
00:09:48.879    "framework_get_governor",
00:09:48.879    "framework_get_scheduler",
00:09:48.879    "framework_set_scheduler",
00:09:48.879    "framework_get_reactors",
00:09:48.879    "thread_get_io_channels",
00:09:48.879    "thread_get_pollers",
00:09:48.879    "thread_get_stats",
00:09:48.879    "framework_monitor_context_switch",
00:09:48.879    "spdk_kill_instance",
00:09:48.879    "log_enable_timestamps",
00:09:48.879    "log_get_flags",
00:09:48.879    "log_clear_flag",
00:09:48.879    "log_set_flag",
00:09:48.879    "log_get_level",
00:09:48.879    "log_set_level",
00:09:48.879    "log_get_print_level",
00:09:48.879    "log_set_print_level",
00:09:48.879    "framework_enable_cpumask_locks",
00:09:48.879    "framework_disable_cpumask_locks",
00:09:48.879    "framework_wait_init",
00:09:48.879    "framework_start_init",
00:09:48.879    "virtio_blk_create_transport",
00:09:48.879    "virtio_blk_get_transports",
00:09:48.879    "vhost_controller_set_coalescing",
00:09:48.879    "vhost_get_controllers",
00:09:48.879    "vhost_delete_controller",
00:09:48.879    "vhost_create_blk_controller",
00:09:48.879    "vhost_scsi_controller_remove_target",
00:09:48.879    "vhost_scsi_controller_add_target",
00:09:48.879    "vhost_start_scsi_controller",
00:09:48.879    "vhost_create_scsi_controller",
00:09:48.879    "ublk_recover_disk",
00:09:48.879    "ublk_get_disks",
00:09:48.879    "ublk_stop_disk",
00:09:48.879    "ublk_start_disk",
00:09:48.879    "ublk_destroy_target",
00:09:48.879    "ublk_create_target",
00:09:48.879    "nbd_get_disks",
00:09:48.879    "nbd_stop_disk",
00:09:48.879    "nbd_start_disk",
00:09:48.879    "env_dpdk_get_mem_stats",
00:09:48.879    "nvmf_stop_mdns_prr",
00:09:48.879    "nvmf_publish_mdns_prr",
00:09:48.879    "nvmf_subsystem_get_listeners",
00:09:48.879    "nvmf_subsystem_get_qpairs",
00:09:48.879    "nvmf_subsystem_get_controllers",
00:09:48.879    "nvmf_get_stats",
00:09:48.879    "nvmf_get_transports",
00:09:48.879    "nvmf_create_transport",
00:09:48.879    "nvmf_get_targets",
00:09:48.879    "nvmf_delete_target",
00:09:48.879    "nvmf_create_target",
00:09:48.879    "nvmf_subsystem_allow_any_host",
00:09:48.879    "nvmf_subsystem_set_keys",
00:09:48.879    "nvmf_subsystem_remove_host",
00:09:48.879    "nvmf_subsystem_add_host",
00:09:48.879    "nvmf_ns_remove_host",
00:09:48.879    "nvmf_ns_add_host",
00:09:48.879    "nvmf_subsystem_remove_ns",
00:09:48.879    "nvmf_subsystem_set_ns_ana_group",
00:09:48.879    "nvmf_subsystem_add_ns",
00:09:48.879    "nvmf_subsystem_listener_set_ana_state",
00:09:48.879    "nvmf_discovery_get_referrals",
00:09:48.879    "nvmf_discovery_remove_referral",
00:09:48.879    "nvmf_discovery_add_referral",
00:09:48.879    "nvmf_subsystem_remove_listener",
00:09:48.879    "nvmf_subsystem_add_listener",
00:09:48.879    "nvmf_delete_subsystem",
00:09:48.879    "nvmf_create_subsystem",
00:09:48.879    "nvmf_get_subsystems",
00:09:48.879    "nvmf_set_crdt",
00:09:48.879    "nvmf_set_config",
00:09:48.879    "nvmf_set_max_subsystems",
00:09:48.879    "iscsi_get_histogram",
00:09:48.879    "iscsi_enable_histogram",
00:09:48.879    "iscsi_set_options",
00:09:48.879    "iscsi_get_auth_groups",
00:09:48.879    "iscsi_auth_group_remove_secret",
00:09:48.879    "iscsi_auth_group_add_secret",
00:09:48.879    "iscsi_delete_auth_group",
00:09:48.879    "iscsi_create_auth_group",
00:09:48.879    "iscsi_set_discovery_auth",
00:09:48.879    "iscsi_get_options",
00:09:48.879    "iscsi_target_node_request_logout",
00:09:48.879    "iscsi_target_node_set_redirect",
00:09:48.879    "iscsi_target_node_set_auth",
00:09:48.879    "iscsi_target_node_add_lun",
00:09:48.879    "iscsi_get_stats",
00:09:48.879    "iscsi_get_connections",
00:09:48.879    "iscsi_portal_group_set_auth",
00:09:48.880    "iscsi_start_portal_group",
00:09:48.880    "iscsi_delete_portal_group",
00:09:48.880    "iscsi_create_portal_group",
00:09:48.880    "iscsi_get_portal_groups",
00:09:48.880    "iscsi_delete_target_node",
00:09:48.880    "iscsi_target_node_remove_pg_ig_maps",
00:09:48.880    "iscsi_target_node_add_pg_ig_maps",
00:09:48.880    "iscsi_create_target_node",
00:09:48.880    "iscsi_get_target_nodes",
00:09:48.880    "iscsi_delete_initiator_group",
00:09:48.880    "iscsi_initiator_group_remove_initiators",
00:09:48.880    "iscsi_initiator_group_add_initiators",
00:09:48.880    "iscsi_create_initiator_group",
00:09:48.880    "iscsi_get_initiator_groups",
00:09:48.880    "fsdev_aio_delete",
00:09:48.880    "fsdev_aio_create",
00:09:48.880    "keyring_linux_set_options",
00:09:48.880    "keyring_file_remove_key",
00:09:48.880    "keyring_file_add_key",
00:09:48.880    "iaa_scan_accel_module",
00:09:48.880    "dsa_scan_accel_module",
00:09:48.880    "ioat_scan_accel_module",
00:09:48.880    "accel_error_inject_error",
00:09:48.880    "bdev_iscsi_delete",
00:09:48.880    "bdev_iscsi_create",
00:09:48.880    "bdev_iscsi_set_options",
00:09:48.880    "bdev_virtio_attach_controller",
00:09:48.880    "bdev_virtio_scsi_get_devices",
00:09:48.880    "bdev_virtio_detach_controller",
00:09:48.880    "bdev_virtio_blk_set_hotplug",
00:09:48.880    "bdev_ftl_set_property",
00:09:48.880    "bdev_ftl_get_properties",
00:09:48.880    "bdev_ftl_get_stats",
00:09:48.880    "bdev_ftl_unmap",
00:09:48.880    "bdev_ftl_unload",
00:09:48.880    "bdev_ftl_delete",
00:09:48.880    "bdev_ftl_load",
00:09:48.880    "bdev_ftl_create",
00:09:48.880    "bdev_aio_delete",
00:09:48.880    "bdev_aio_rescan",
00:09:48.880    "bdev_aio_create",
00:09:48.880    "blobfs_create",
00:09:48.880    "blobfs_detect",
00:09:48.880    "blobfs_set_cache_size",
00:09:48.880    "bdev_zone_block_delete",
00:09:48.880    "bdev_zone_block_create",
00:09:48.880    "bdev_delay_delete",
00:09:48.880    "bdev_delay_create",
00:09:48.880    "bdev_delay_update_latency",
00:09:48.880    "bdev_split_delete",
00:09:48.880    "bdev_split_create",
00:09:48.880    "bdev_error_inject_error",
00:09:48.880    "bdev_error_delete",
00:09:48.880    "bdev_error_create",
00:09:48.880    "bdev_raid_set_options",
00:09:48.880    "bdev_raid_remove_base_bdev",
00:09:48.880    "bdev_raid_add_base_bdev",
00:09:48.880    "bdev_raid_delete",
00:09:48.880    "bdev_raid_create",
00:09:48.880    "bdev_raid_get_bdevs",
00:09:48.880    "bdev_lvol_set_parent_bdev",
00:09:48.880    "bdev_lvol_set_parent",
00:09:48.880    "bdev_lvol_check_shallow_copy",
00:09:48.880    "bdev_lvol_start_shallow_copy",
00:09:48.880    "bdev_lvol_grow_lvstore",
00:09:48.880    "bdev_lvol_get_lvols",
00:09:48.880    "bdev_lvol_get_lvstores",
00:09:48.880    "bdev_lvol_delete",
00:09:48.880    "bdev_lvol_set_read_only",
00:09:48.880    "bdev_lvol_resize",
00:09:48.880    "bdev_lvol_decouple_parent",
00:09:48.880    "bdev_lvol_inflate",
00:09:48.880    "bdev_lvol_rename",
00:09:48.880    "bdev_lvol_clone_bdev",
00:09:48.880    "bdev_lvol_clone",
00:09:48.880    "bdev_lvol_snapshot",
00:09:48.880    "bdev_lvol_create",
00:09:48.880    "bdev_lvol_delete_lvstore",
00:09:48.880    "bdev_lvol_rename_lvstore",
00:09:48.880    "bdev_lvol_create_lvstore",
00:09:48.880    "bdev_passthru_delete",
00:09:48.880    "bdev_passthru_create",
00:09:48.880    "bdev_nvme_cuse_unregister",
00:09:48.880    "bdev_nvme_cuse_register",
00:09:48.880    "bdev_opal_new_user",
00:09:48.880    "bdev_opal_set_lock_state",
00:09:48.880    "bdev_opal_delete",
00:09:48.880    "bdev_opal_get_info",
00:09:48.880    "bdev_opal_create",
00:09:48.880    "bdev_nvme_opal_revert",
00:09:48.880    "bdev_nvme_opal_init",
00:09:48.880    "bdev_nvme_send_cmd",
00:09:48.880    "bdev_nvme_set_keys",
00:09:48.880    "bdev_nvme_get_path_iostat",
00:09:48.880    "bdev_nvme_get_mdns_discovery_info",
00:09:48.880    "bdev_nvme_stop_mdns_discovery",
00:09:48.880    "bdev_nvme_start_mdns_discovery",
00:09:48.880    "bdev_nvme_set_multipath_policy",
00:09:48.880    "bdev_nvme_set_preferred_path",
00:09:48.880    "bdev_nvme_get_io_paths",
00:09:48.880    "bdev_nvme_remove_error_injection",
00:09:48.880    "bdev_nvme_add_error_injection",
00:09:48.880    "bdev_nvme_get_discovery_info",
00:09:48.880    "bdev_nvme_stop_discovery",
00:09:48.880    "bdev_nvme_start_discovery",
00:09:48.880    "bdev_nvme_get_controller_health_info",
00:09:48.880    "bdev_nvme_disable_controller",
00:09:48.880    "bdev_nvme_enable_controller",
00:09:48.880    "bdev_nvme_reset_controller",
00:09:48.880    "bdev_nvme_get_transport_statistics",
00:09:48.880    "bdev_nvme_apply_firmware",
00:09:48.880    "bdev_nvme_detach_controller",
00:09:48.880    "bdev_nvme_get_controllers",
00:09:48.880    "bdev_nvme_attach_controller",
00:09:48.880    "bdev_nvme_set_hotplug",
00:09:48.880    "bdev_nvme_set_options",
00:09:48.880    "bdev_null_resize",
00:09:48.880    "bdev_null_delete",
00:09:48.880    "bdev_null_create",
00:09:48.880    "bdev_malloc_delete",
00:09:48.880    "bdev_malloc_create"
00:09:48.880  ]
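The JSON array above is the output of rpc_get_methods, reached over TCP: spdkcli_tcp fronts the target's UNIX socket with socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock, and rpc.py then connects with -s 127.0.0.1 -p 9998 instead of a socket path. The test's single-shot listener serves exactly one connection; a sketch of the same bridge, with reuseaddr,fork added as a common variant for serving several sequential rpc.py calls:

    # bridge TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # same retry/timeout flags as the trace; rpc.py now speaks TCP
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"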
00:09:48.880   13:46:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:09:48.880   13:46:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:48.880   13:46:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:49.138   13:46:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:49.138   13:46:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69903
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69903 ']'
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69903
00:09:49.138    13:46:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:49.138    13:46:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69903
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:49.138  killing process with pid 69903
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69903'
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69903
00:09:49.138   13:46:31 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69903
00:09:52.423  
00:09:52.423  real	0m5.314s
00:09:52.423  user	0m9.498s
00:09:52.423  sys	0m1.006s
00:09:52.423   13:46:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:52.423  ************************************
00:09:52.423  END TEST spdkcli_tcp
00:09:52.423  ************************************
00:09:52.423   13:46:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:52.423   13:46:34  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:52.423   13:46:34  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:52.423   13:46:34  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:52.424   13:46:34  -- common/autotest_common.sh@10 -- # set +x
00:09:52.424  ************************************
00:09:52.424  START TEST dpdk_mem_utility
00:09:52.424  ************************************
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:52.424  * Looking for test storage...
00:09:52.424  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:52.424     13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:09:52.424     13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:52.424     13:46:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:52.424    13:46:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:52.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:52.424  		--rc genhtml_branch_coverage=1
00:09:52.424  		--rc genhtml_function_coverage=1
00:09:52.424  		--rc genhtml_legend=1
00:09:52.424  		--rc geninfo_all_blocks=1
00:09:52.424  		--rc geninfo_unexecuted_blocks=1
00:09:52.424  		
00:09:52.424  		'
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:52.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:52.424  		--rc genhtml_branch_coverage=1
00:09:52.424  		--rc genhtml_function_coverage=1
00:09:52.424  		--rc genhtml_legend=1
00:09:52.424  		--rc geninfo_all_blocks=1
00:09:52.424  		--rc geninfo_unexecuted_blocks=1
00:09:52.424  		
00:09:52.424  		'
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:52.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:52.424  		--rc genhtml_branch_coverage=1
00:09:52.424  		--rc genhtml_function_coverage=1
00:09:52.424  		--rc genhtml_legend=1
00:09:52.424  		--rc geninfo_all_blocks=1
00:09:52.424  		--rc geninfo_unexecuted_blocks=1
00:09:52.424  		
00:09:52.424  		'
00:09:52.424    13:46:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:52.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:52.424  		--rc genhtml_branch_coverage=1
00:09:52.424  		--rc genhtml_function_coverage=1
00:09:52.424  		--rc genhtml_legend=1
00:09:52.424  		--rc geninfo_all_blocks=1
00:09:52.424  		--rc geninfo_unexecuted_blocks=1
00:09:52.424  		
00:09:52.424  		'
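The xtrace above is the coverage preamble from autotest_common.sh: it reads the lcov version, runs it through cmp_versions from scripts/common.sh to test "1.15 < 2", and exports LCOV_OPTS/LCOV accordingly. A minimal standalone sketch of that comparison (illustrative only; the real helper also normalizes non-numeric fields through its decimal function):

    # Split two dotted versions on ., - and :, then compare field by field,
    # treating missing fields as 0. Returns 0 (true) when $1 < $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # same verdict as the trace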
00:09:52.424   13:46:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:52.424   13:46:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70036
00:09:52.424   13:46:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:52.424   13:46:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70036
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70036 ']'
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:52.424  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:52.424   13:46:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:52.424  [2024-12-11 13:46:35.043895] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:52.424  [2024-12-11 13:46:35.044081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70036 ]
00:09:52.682  [2024-12-11 13:46:35.240416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.682  [2024-12-11 13:46:35.412383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:54.058   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:54.058   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
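waitforlisten has returned at this point: spdk_tgt is up and its RPC socket accepts connections. A compact stand-in for that polling step (a sketch only; the real helper's retry bookkeeping is what the max_retries and i variables above belong to):

    # Poll a UNIX-domain socket until something is listening on it.
    wait_for_sock() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            python3 -c "import socket, sys; s = socket.socket(socket.AF_UNIX); s.connect(sys.argv[1])" "$sock" 2>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }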
00:09:54.058   13:46:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:09:54.058   13:46:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:09:54.058   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.058   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:54.058  {
00:09:54.058  "filename": "/tmp/spdk_mem_dump.txt"
00:09:54.058  }
00:09:54.058   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.058   13:46:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:54.058  DPDK memory size 824.000000 MiB in 1 heap(s)
00:09:54.058  1 heaps totaling size 824.000000 MiB
00:09:54.058    size:  824.000000 MiB heap id: 0
00:09:54.058  end heaps----------
00:09:54.058  9 mempools totaling size 603.782043 MiB
00:09:54.058    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:09:54.058    size:  158.602051 MiB name: PDU_data_out_Pool
00:09:54.058    size:  100.555481 MiB name: bdev_io_70036
00:09:54.058    size:   50.003479 MiB name: msgpool_70036
00:09:54.058    size:   36.509338 MiB name: fsdev_io_70036
00:09:54.058    size:   21.763794 MiB name: PDU_Pool
00:09:54.058    size:   19.513306 MiB name: SCSI_TASK_Pool
00:09:54.058    size:    4.133484 MiB name: evtpool_70036
00:09:54.058    size:    0.026123 MiB name: Session_Pool
00:09:54.058  end mempools-------
00:09:54.058  6 memzones totaling size 4.142822 MiB
00:09:54.058    size:    1.000366 MiB name: RG_ring_0_70036
00:09:54.058    size:    1.000366 MiB name: RG_ring_1_70036
00:09:54.058    size:    1.000366 MiB name: RG_ring_4_70036
00:09:54.058    size:    1.000366 MiB name: RG_ring_5_70036
00:09:54.058    size:    0.125366 MiB name: RG_ring_2_70036
00:09:54.058    size:    0.015991 MiB name: RG_ring_3_70036
00:09:54.058  end memzones-------
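The stretch from the JSON result above to this summary is three manual steps. A sketch of reproducing them against a running spdk_tgt, with paths and flags copied from the trace (the RPC writes its dump to /tmp/spdk_mem_dump.txt, which dpdk_mem_info.py then parses):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt &                      # target under inspection
    spdkpid=$!
    sleep 2                                         # crude stand-in for waitforlisten
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # dumps to /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary
    $SPDK/scripts/dpdk_mem_info.py -m 0             # per-element map of heap 0 (next)
    kill $spdkpid && wait $spdkpid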
00:09:54.058   13:46:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:09:54.318  heap id: 0 total size: 824.000000 MiB number of busy elements: 295 number of free elements: 18
00:09:54.318    list of free elements. size: 16.786377 MiB
00:09:54.318      element at address: 0x200003e00000 with size:    1.995972 MiB
00:09:54.318      element at address: 0x200008000000 with size:    1.995972 MiB
00:09:54.318      element at address: 0x200010600000 with size:    1.991028 MiB
00:09:54.318      element at address: 0x200019500040 with size:    0.999939 MiB
00:09:54.318      element at address: 0x200019900040 with size:    0.999939 MiB
00:09:54.318      element at address: 0x200019a00000 with size:    0.999084 MiB
00:09:54.318      element at address: 0x200032600000 with size:    0.994324 MiB
00:09:54.318      element at address: 0x200000400000 with size:    0.992004 MiB
00:09:54.318      element at address: 0x200019200000 with size:    0.959656 MiB
00:09:54.318      element at address: 0x200019d00040 with size:    0.936401 MiB
00:09:54.318      element at address: 0x200000200000 with size:    0.716980 MiB
00:09:54.318      element at address: 0x20001b400000 with size:    0.567078 MiB
00:09:54.318      element at address: 0x200019600000 with size:    0.488708 MiB
00:09:54.318      element at address: 0x200019e00000 with size:    0.485413 MiB
00:09:54.318      element at address: 0x200000c00000 with size:    0.483826 MiB
00:09:54.318      element at address: 0x200012c00000 with size:    0.433228 MiB
00:09:54.318      element at address: 0x200028800000 with size:    0.390442 MiB
00:09:54.318      element at address: 0x200000800000 with size:    0.356384 MiB
00:09:54.318    list of standard malloc elements. size: 199.282715 MiB
00:09:54.318      element at address: 0x2000081fef80 with size:  132.000183 MiB
00:09:54.318      element at address: 0x200003ffef80 with size:   64.000183 MiB
00:09:54.318      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:09:54.318      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:09:54.318      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:09:54.318      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:09:54.318      element at address: 0x200019deff40 with size:    0.062683 MiB
00:09:54.318      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:09:54.318      element at address: 0x200019defdc0 with size:    0.000366 MiB
00:09:54.318      element at address: 0x200007fff040 with size:    0.000305 MiB
00:09:54.318      element at address: 0x2000105ff040 with size:    0.000305 MiB
00:09:54.318      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:09:54.318      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:09:54.318      element at address: 0x2000004fdf40 with size:    0.000244 MiB
      [... 26 elements from 0x2000004fe040 through 0x2000004ff940, 0x100 apart, each with size:    0.000244 MiB ...]
00:09:54.318      element at address: 0x2000004ffbc0 with size:    0.000244 MiB
00:09:54.318      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:09:54.318      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:09:54.318      element at address: 0x2000008ffa80 with size:    0.000244 MiB
      [... 47 elements from 0x200000c7bdc0 through 0x200000c7ebc0, 0x100 apart, each with size:    0.000244 MiB ...]
00:09:54.319      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200000cff000 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff180 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff280 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff380 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff480 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff700 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff800 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fff900 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fffa00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fffb00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fffc00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fffd00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007fffe00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200007ffff00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff180 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff280 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff380 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff480 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff580 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff680 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff780 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff880 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ff980 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ffa80 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ffb80 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105ffc80 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000105fff00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6ee80 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6ef80 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f080 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f180 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f280 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f380 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f480 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f580 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f680 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f780 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012c6f880 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200012cefbc0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000192fdd00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d1c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d2c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d3c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d4c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d5c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d6c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d7c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d8c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20001967d9c0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x2000196fdd00 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200019affc40 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200019defbc0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200019defcc0 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200019ebc680 with size:    0.000244 MiB
      [... 66 elements from 0x20001b4912c0 through 0x20001b4953c0, 0x100 apart, each with size:    0.000244 MiB ...]
00:09:54.319      element at address: 0x200028863f40 with size:    0.000244 MiB
00:09:54.319      element at address: 0x200028864040 with size:    0.000244 MiB
00:09:54.319      element at address: 0x20002886ad00 with size:    0.000244 MiB
      [... 80 elements from 0x20002886af80 through 0x20002886fe80, 0x100 apart, each with size:    0.000244 MiB ...]
00:09:54.320    list of memzone associated elements. size: 607.930908 MiB
00:09:54.320      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:09:54.320        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:54.320      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:09:54.320        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:54.320      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:09:54.320        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_70036_0
00:09:54.320      element at address: 0x200000dff340 with size:   48.003113 MiB
00:09:54.320        associated memzone info: size:   48.002930 MiB name: MP_msgpool_70036_0
00:09:54.320      element at address: 0x2000107fdb40 with size:   36.008972 MiB
00:09:54.320        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_70036_0
00:09:54.320      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:09:54.320        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:09:54.320      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:09:54.320        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:54.320      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:09:54.320        associated memzone info: size:    3.000122 MiB name: MP_evtpool_70036_0
00:09:54.320      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:09:54.320        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_70036
00:09:54.320      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:09:54.320        associated memzone info: size:    1.007996 MiB name: MP_evtpool_70036
00:09:54.320      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:09:54.320        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:09:54.320      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:09:54.320        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:54.320      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:09:54.320        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:09:54.320      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:09:54.320        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:54.320      element at address: 0x200000cff100 with size:    1.000549 MiB
00:09:54.320        associated memzone info: size:    1.000366 MiB name: RG_ring_0_70036
00:09:54.320      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:09:54.320        associated memzone info: size:    1.000366 MiB name: RG_ring_1_70036
00:09:54.320      element at address: 0x200019affd40 with size:    1.000549 MiB
00:09:54.320        associated memzone info: size:    1.000366 MiB name: RG_ring_4_70036
00:09:54.320      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:09:54.320        associated memzone info: size:    1.000366 MiB name: RG_ring_5_70036
00:09:54.320      element at address: 0x20000085b3c0 with size:    0.500549 MiB
00:09:54.320        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_70036
00:09:54.320      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:09:54.320        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_70036
00:09:54.320      element at address: 0x20001967dac0 with size:    0.500549 MiB
00:09:54.320        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:09:54.320      element at address: 0x200012c6f980 with size:    0.500549 MiB
00:09:54.320        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:54.320      element at address: 0x200019e7c440 with size:    0.250549 MiB
00:09:54.320        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:54.320      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:09:54.320        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_70036
00:09:54.320      element at address: 0x2000008df840 with size:    0.125549 MiB
00:09:54.320        associated memzone info: size:    0.125366 MiB name: RG_ring_2_70036
00:09:54.320      element at address: 0x2000192f5ac0 with size:    0.031799 MiB
00:09:54.320        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:54.320      element at address: 0x200028864140 with size:    0.023804 MiB
00:09:54.320        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:09:54.320      element at address: 0x2000008db600 with size:    0.016174 MiB
00:09:54.320        associated memzone info: size:    0.015991 MiB name: RG_ring_3_70036
00:09:54.320      element at address: 0x20002886a2c0 with size:    0.002502 MiB
00:09:54.320        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:09:54.320      element at address: 0x2000004ffa40 with size:    0.000366 MiB
00:09:54.320        associated memzone info: size:    0.000183 MiB name: MP_msgpool_70036
00:09:54.320      element at address: 0x2000105ffd80 with size:    0.000366 MiB
00:09:54.320        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_70036
00:09:54.320      element at address: 0x200007fff580 with size:    0.000366 MiB
00:09:54.320        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_70036
00:09:54.320      element at address: 0x20002886ae00 with size:    0.000366 MiB
00:09:54.320        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
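The association table above pairs each element with the memzone or mempool registered on top of it; the _70036 suffix is the target's pid, so names are unique per run. Fishing one ring out of the same output is a one-liner (sketch; the name is taken from the table above):

    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 | grep -B1 'name: RG_ring_2_70036'
    # -> element at address: 0x2000008df840 with size:    0.125549 MiB
    #      associated memzone info: size:    0.125366 MiB name: RG_ring_2_70036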
00:09:54.320   13:46:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:54.320   13:46:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70036
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70036 ']'
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70036
00:09:54.320    13:46:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:54.320    13:46:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70036
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:54.320  killing process with pid 70036
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70036'
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70036
00:09:54.320   13:46:36 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70036
00:09:57.646  
00:09:57.646  real	0m5.231s
00:09:57.646  user	0m4.994s
00:09:57.646  sys	0m0.904s
00:09:57.646   13:46:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.646   13:46:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:57.646  ************************************
00:09:57.646  END TEST dpdk_mem_utility
00:09:57.646  ************************************
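The shutdown traced before the timing block is the generic killprocess helper. A simplified approximation (the authoritative version in autotest_common.sh also special-cases sudo-wrapped targets, which is what the reactor_0 = sudo comparison above is probing):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1               # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0  # nothing left to kill
        echo "killing process with pid $pid"
        kill "$pid"                             # SIGTERM first, as in the trace
        wait "$pid" 2>/dev/null || true         # reap it; ignore exit status
    }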
00:09:57.646   13:46:40  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:57.646   13:46:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:57.646   13:46:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.646   13:46:40  -- common/autotest_common.sh@10 -- # set +x
00:09:57.646  ************************************
00:09:57.646  START TEST event
00:09:57.646  ************************************
00:09:57.646   13:46:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:57.646  * Looking for test storage...
00:09:57.646  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:57.646    13:46:40 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:57.646     13:46:40 event -- common/autotest_common.sh@1711 -- # lcov --version
00:09:57.646     13:46:40 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:57.646    13:46:40 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:57.646    13:46:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:57.646    13:46:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:57.646    13:46:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:57.646    13:46:40 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:57.646    13:46:40 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:57.646    13:46:40 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:57.646    13:46:40 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:57.646    13:46:40 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:57.646    13:46:40 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:57.646    13:46:40 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:57.646    13:46:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:57.646    13:46:40 event -- scripts/common.sh@344 -- # case "$op" in
00:09:57.646    13:46:40 event -- scripts/common.sh@345 -- # : 1
00:09:57.646    13:46:40 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:57.646    13:46:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:57.646     13:46:40 event -- scripts/common.sh@365 -- # decimal 1
00:09:57.646     13:46:40 event -- scripts/common.sh@353 -- # local d=1
00:09:57.646     13:46:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:57.646     13:46:40 event -- scripts/common.sh@355 -- # echo 1
00:09:57.646    13:46:40 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:57.646     13:46:40 event -- scripts/common.sh@366 -- # decimal 2
00:09:57.646     13:46:40 event -- scripts/common.sh@353 -- # local d=2
00:09:57.646     13:46:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:57.646     13:46:40 event -- scripts/common.sh@355 -- # echo 2
00:09:57.646    13:46:40 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:57.646    13:46:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:57.646    13:46:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:57.646    13:46:40 event -- scripts/common.sh@368 -- # return 0
00:09:57.646    13:46:40 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:57.646    13:46:40 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:57.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.646  		--rc genhtml_branch_coverage=1
00:09:57.646  		--rc genhtml_function_coverage=1
00:09:57.646  		--rc genhtml_legend=1
00:09:57.646  		--rc geninfo_all_blocks=1
00:09:57.646  		--rc geninfo_unexecuted_blocks=1
00:09:57.646  		
00:09:57.646  		'
00:09:57.646    13:46:40 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:57.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.646  		--rc genhtml_branch_coverage=1
00:09:57.646  		--rc genhtml_function_coverage=1
00:09:57.646  		--rc genhtml_legend=1
00:09:57.646  		--rc geninfo_all_blocks=1
00:09:57.647  		--rc geninfo_unexecuted_blocks=1
00:09:57.647  		
00:09:57.647  		'
00:09:57.647    13:46:40 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:57.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.647  		--rc genhtml_branch_coverage=1
00:09:57.647  		--rc genhtml_function_coverage=1
00:09:57.647  		--rc genhtml_legend=1
00:09:57.647  		--rc geninfo_all_blocks=1
00:09:57.647  		--rc geninfo_unexecuted_blocks=1
00:09:57.647  		
00:09:57.647  		'
00:09:57.647    13:46:40 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:57.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.647  		--rc genhtml_branch_coverage=1
00:09:57.647  		--rc genhtml_function_coverage=1
00:09:57.647  		--rc genhtml_legend=1
00:09:57.647  		--rc geninfo_all_blocks=1
00:09:57.647  		--rc geninfo_unexecuted_blocks=1
00:09:57.647  		
00:09:57.647  		'
00:09:57.647   13:46:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:57.647    13:46:40 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:57.647   13:46:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:57.647   13:46:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:57.647   13:46:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.647   13:46:40 event -- common/autotest_common.sh@10 -- # set +x
00:09:57.647  ************************************
00:09:57.647  START TEST event_perf
00:09:57.647  ************************************
00:09:57.647   13:46:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:57.647  Running I/O for 1 seconds...[2024-12-11 13:46:40.322051] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:57.647  [2024-12-11 13:46:40.322227] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70155 ]
00:09:57.906  [2024-12-11 13:46:40.516774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:57.906  [2024-12-11 13:46:40.653495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:57.906  [2024-12-11 13:46:40.653709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:09:57.906  [2024-12-11 13:46:40.653802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.906  [2024-12-11 13:46:40.653831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:09:59.281  Running I/O for 1 seconds...
00:09:59.281  lcore  0:   190040
00:09:59.281  lcore  1:   190039
00:09:59.281  lcore  2:   190040
00:09:59.281  lcore  3:   190039
00:09:59.281  done.
00:09:59.281  
00:09:59.281  real	0m1.642s
00:09:59.281  user	0m4.403s
00:09:59.281  sys	0m0.136s
00:09:59.281   13:46:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.281   13:46:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:09:59.281  ************************************
00:09:59.281  END TEST event_perf
00:09:59.281  ************************************
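event_perf pins one reactor per core in the mask and counts events for the requested duration. Re-running the measurement above by hand is a single command (flags as in the trace: -m is the reactor core mask, -t the runtime in seconds):

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # lcore N: <count> lines report events handled by reactor N; 0xF puts
    # reactors on cores 0-3, matching the four ~190k counts above.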
00:09:59.281   13:46:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:59.281   13:46:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:59.281   13:46:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:59.281   13:46:41 event -- common/autotest_common.sh@10 -- # set +x
00:09:59.281  ************************************
00:09:59.281  START TEST event_reactor
00:09:59.281  ************************************
00:09:59.281   13:46:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:59.281  [2024-12-11 13:46:42.030380] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:09:59.281  [2024-12-11 13:46:42.030552] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70195 ]
00:09:59.540  [2024-12-11 13:46:42.221918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.798  [2024-12-11 13:46:42.343811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.174  test_start
00:10:01.174  oneshot
00:10:01.174  tick 100
00:10:01.174  tick 100
00:10:01.174  tick 250
00:10:01.174  tick 100
00:10:01.174  tick 100
00:10:01.174  tick 100
00:10:01.174  tick 250
00:10:01.174  tick 500
00:10:01.174  tick 100
00:10:01.174  tick 100
00:10:01.174  tick 250
00:10:01.174  tick 100
00:10:01.174  tick 100
00:10:01.174  test_end
00:10:01.174  
00:10:01.174  real	0m1.612s
00:10:01.174  user	0m1.405s
00:10:01.174  sys	0m0.107s
00:10:01.174   13:46:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:01.174   13:46:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:10:01.174  ************************************
00:10:01.174  END TEST event_reactor
00:10:01.174  ************************************
00:10:01.174   13:46:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:10:01.174   13:46:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:01.174   13:46:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.174   13:46:43 event -- common/autotest_common.sh@10 -- # set +x
00:10:01.174  ************************************
00:10:01.174  START TEST event_reactor_perf
00:10:01.174  ************************************
00:10:01.174   13:46:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:10:01.174  [2024-12-11 13:46:43.707238] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:01.174  [2024-12-11 13:46:43.707425] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70236 ]
00:10:01.175  [2024-12-11 13:46:43.908651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.433  [2024-12-11 13:46:44.035547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.808  test_start
00:10:02.808  test_end
00:10:02.808  Performance:   352652 events per second
00:10:02.808  
00:10:02.808  real	0m1.643s
00:10:02.808  user	0m1.417s
00:10:02.808  sys	0m0.125s
00:10:02.808   13:46:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:02.808   13:46:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:10:02.808  ************************************
00:10:02.808  END TEST event_reactor_perf
00:10:02.808  ************************************
00:10:02.808    13:46:45 event -- event/event.sh@49 -- # uname -s
00:10:02.808   13:46:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:10:02.808   13:46:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:10:02.808   13:46:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:02.808   13:46:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:02.808   13:46:45 event -- common/autotest_common.sh@10 -- # set +x
00:10:02.808  ************************************
00:10:02.808  START TEST event_scheduler
00:10:02.808  ************************************
00:10:02.808   13:46:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:10:02.808  * Looking for test storage...
00:10:02.808  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:10:02.808    13:46:45 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:02.808     13:46:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:10:02.808     13:46:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:02.808    13:46:45 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:02.808     13:46:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:02.808    13:46:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:10:02.809    13:46:45 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:02.809    13:46:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:02.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:02.809  		--rc genhtml_branch_coverage=1
00:10:02.809  		--rc genhtml_function_coverage=1
00:10:02.809  		--rc genhtml_legend=1
00:10:02.809  		--rc geninfo_all_blocks=1
00:10:02.809  		--rc geninfo_unexecuted_blocks=1
00:10:02.809  		
00:10:02.809  		'
00:10:02.809    13:46:45 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:02.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:02.809  		--rc genhtml_branch_coverage=1
00:10:02.809  		--rc genhtml_function_coverage=1
00:10:02.809  		--rc genhtml_legend=1
00:10:02.809  		--rc geninfo_all_blocks=1
00:10:02.809  		--rc geninfo_unexecuted_blocks=1
00:10:02.809  		
00:10:02.809  		'
00:10:02.809    13:46:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:02.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:02.809  		--rc genhtml_branch_coverage=1
00:10:02.809  		--rc genhtml_function_coverage=1
00:10:02.809  		--rc genhtml_legend=1
00:10:02.809  		--rc geninfo_all_blocks=1
00:10:02.809  		--rc geninfo_unexecuted_blocks=1
00:10:02.809  		
00:10:02.809  		'
00:10:02.809    13:46:45 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:02.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:02.809  		--rc genhtml_branch_coverage=1
00:10:02.809  		--rc genhtml_function_coverage=1
00:10:02.809  		--rc genhtml_legend=1
00:10:02.809  		--rc geninfo_all_blocks=1
00:10:02.809  		--rc geninfo_unexecuted_blocks=1
00:10:02.809  		
00:10:02.809  		'
00:10:02.809   13:46:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:10:02.809   13:46:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70302
00:10:02.809   13:46:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:10:02.809   13:46:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:10:02.809   13:46:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70302
00:10:02.809   13:46:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70302 ']'
00:10:02.809   13:46:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:02.809   13:46:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:02.809  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:02.809   13:46:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:02.809   13:46:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:02.809   13:46:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:03.067  [2024-12-11 13:46:45.656941] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:03.067  [2024-12-11 13:46:45.657975] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70302 ]
00:10:03.329  [2024-12-11 13:46:45.861444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:03.329  [2024-12-11 13:46:45.997895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.329  [2024-12-11 13:46:45.997999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:03.329  [2024-12-11 13:46:45.998165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:10:03.329  [2024-12-11 13:46:45.998191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
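[annotation] The scheduler test binary was launched with -m 0xF -p 0x2 --wait-for-rpc, so once the four reactors are up it idles until RPCs arrive; waitforlisten in autotest_common.sh blocks the script until the UNIX socket answers. A simplified stand-in for that wait, assuming the default socket path (this is not SPDK's exact implementation):

    rpc_addr=/var/tmp/spdk.sock
    for ((i = 1; i <= 100; i++)); do    # max_retries=100, as traced above
        [ -S "$rpc_addr" ] && break
        sleep 0.1
    done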
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:10:03.896   13:46:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:03.896  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:03.896  POWER: Cannot set governor of lcore 0 to userspace
00:10:03.896  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:03.896  POWER: Cannot set governor of lcore 0 to performance
00:10:03.896  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:03.896  POWER: Cannot set governor of lcore 0 to userspace
00:10:03.896  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:03.896  POWER: Cannot set governor of lcore 0 to userspace
00:10:03.896  GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:10:03.896  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:10:03.896  POWER: Unable to set Power Management Environment for lcore 0
00:10:03.896  [2024-12-11 13:46:46.656167] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:10:03.896  [2024-12-11 13:46:46.656190] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:10:03.896  [2024-12-11 13:46:46.656202] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:10:03.896  [2024-12-11 13:46:46.656226] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:10:03.896  [2024-12-11 13:46:46.656236] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:10:03.896  [2024-12-11 13:46:46.656249] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.896   13:46:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.896   13:46:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:04.465  [2024-12-11 13:46:47.040422] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
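[annotation] The POWER/GUEST_CHANNEL errors above are expected in this VM: there is no writable cpufreq scaling_governor and no virtio power agent, so the dynamic scheduler cannot initialize the DPDK governor and falls back to its defaults (load limit 20, core limit 80, core busy 95). The two rpc_cmd calls that got the app to this point are equivalent to direct rpc.py invocations against the default socket:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init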
00:10:04.465   13:46:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.465   13:46:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:10:04.465   13:46:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:04.465   13:46:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:04.465   13:46:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  ************************************
00:10:04.466  START TEST scheduler_create_thread
00:10:04.466  ************************************
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  2
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  3
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  4
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  5
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  6
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  7
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  8
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  9
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466  10
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.466   13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.466    13:46:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:05.403    13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.403   13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:10:05.404   13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:10:05.404   13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.404   13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:06.782   13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
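[annotation] The scheduler_create_thread body is a sequence of RPC-plugin calls: four fully active threads pinned to cores 0-3, four idle pinned threads, two unpinned threads, one activity change, and one create-then-delete. Condensed, with the mask/activity values exactly as traced (thread ids come back on stdout; rpc_cmd wraps scripts/rpc.py):

    rpc() { scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock "$@"; }
    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for 0x2/0x4/0x8
    rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # repeated for 0x2/0x4/0x8
    rpc scheduler_thread_create -n one_third_active -a 30
    id=$(rpc scheduler_thread_create -n half_active -a 0)        # id 11 in this run
    rpc scheduler_thread_set_active "$id" 50
    del=$(rpc scheduler_thread_create -n deleted -a 100)         # id 12 in this run
    rpc scheduler_thread_delete "$del"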
00:10:06.782  
00:10:06.782  real	0m2.141s
00:10:06.782  user	0m0.024s
00:10:06.782  sys	0m0.006s
00:10:06.782   13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.782   13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:06.782  ************************************
00:10:06.782  END TEST scheduler_create_thread
00:10:06.782  ************************************
00:10:06.782   13:46:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:10:06.782   13:46:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70302
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70302 ']'
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70302
00:10:06.782    13:46:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:06.782    13:46:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70302
00:10:06.782  killing process with pid 70302
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70302'
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70302
00:10:06.782   13:46:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70302
00:10:07.041  [2024-12-11 13:46:49.674303] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
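[annotation] killprocess, traced above, is more careful than a bare kill: it verifies the pid is still alive with kill -0, checks the process name (refusing to signal a sudo wrapper directly), then kills and waits so the EXIT trap cannot fire on an already-reaped process. Roughly:

    kill -0 "$scheduler_pid"                        # still alive?
    name=$(ps --no-headers -o comm= "$scheduler_pid")
    [ "$name" != sudo ] && kill "$scheduler_pid"    # never signal sudo itself
    wait "$scheduler_pid"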
00:10:08.418  ************************************
00:10:08.418  END TEST event_scheduler
00:10:08.418  ************************************
00:10:08.418  
00:10:08.418  real	0m5.490s
00:10:08.418  user	0m9.401s
00:10:08.418  sys	0m0.610s
00:10:08.418   13:46:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:08.418   13:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:08.418   13:46:50 event -- event/event.sh@51 -- # modprobe -n nbd
00:10:08.418   13:46:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:10:08.418   13:46:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:08.418   13:46:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:08.418   13:46:50 event -- common/autotest_common.sh@10 -- # set +x
00:10:08.418  ************************************
00:10:08.418  START TEST app_repeat
00:10:08.418  ************************************
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70414
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:10:08.418  Process app_repeat pid: 70414
00:10:08.418  spdk_app_start Round 0
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70414'
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:10:08.418   13:46:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70414 /var/tmp/spdk-nbd.sock
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70414 ']'
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:08.418  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:08.418   13:46:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:08.418  [2024-12-11 13:46:50.974459] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:08.418  [2024-12-11 13:46:50.974629] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70414 ]
00:10:08.418  [2024-12-11 13:46:51.167899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:08.677  [2024-12-11 13:46:51.297606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.677  [2024-12-11 13:46:51.297673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
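[annotation] app_repeat now loops three rounds (Round 0/1/2 below); each round builds the same fixture: two malloc bdevs, 64 and 4096 being total size in MB and block size per bdev_malloc_create's positional arguments, exported over NBD via the app's own socket. The rpc.py calls traced in each round boil down to:

    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096        # -> Malloc0
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096        # -> Malloc1
    scripts/rpc.py -s $sock nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s $sock nbd_start_disk Malloc1 /dev/nbd1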
00:10:09.244   13:46:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:09.244   13:46:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:09.244   13:46:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:09.504  Malloc0
00:10:09.504   13:46:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:09.762  Malloc1
00:10:09.762   13:46:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:09.762   13:46:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:10.021  /dev/nbd0
00:10:10.021    13:46:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:10.021   13:46:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:10.021  1+0 records in
00:10:10.021  1+0 records out
00:10:10.021  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209756 s, 19.5 MB/s
00:10:10.021    13:46:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:10.021   13:46:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:10.021   13:46:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:10.021   13:46:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:10.021   13:46:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:10.280  /dev/nbd1
00:10:10.280    13:46:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:10.280   13:46:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:10.280  1+0 records in
00:10:10.280  1+0 records out
00:10:10.280  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298588 s, 13.7 MB/s
00:10:10.280    13:46:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:10.280   13:46:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:10.281   13:46:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:10.281   13:46:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:10.281   13:46:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
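[annotation] waitfornbd, traced twice above, does two things per device: poll /proc/partitions until the nbd shows up, then prove it actually services reads with a single direct-I/O block and check that bytes landed. Simplified sketch (the temp-file path is a stand-in; this run uses one under the repo's test/event dir):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]    # a zero-byte read would mean the device is not really up
    }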
00:10:10.281    13:46:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:10.281    13:46:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:10.281     13:46:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:10.539    13:46:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:10.539    {
00:10:10.539      "nbd_device": "/dev/nbd0",
00:10:10.539      "bdev_name": "Malloc0"
00:10:10.539    },
00:10:10.539    {
00:10:10.539      "nbd_device": "/dev/nbd1",
00:10:10.539      "bdev_name": "Malloc1"
00:10:10.539    }
00:10:10.539  ]'
00:10:10.539     13:46:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:10.539    {
00:10:10.539      "nbd_device": "/dev/nbd0",
00:10:10.539      "bdev_name": "Malloc0"
00:10:10.539    },
00:10:10.539    {
00:10:10.539      "nbd_device": "/dev/nbd1",
00:10:10.539      "bdev_name": "Malloc1"
00:10:10.539    }
00:10:10.539  ]'
00:10:10.539     13:46:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:10.539    13:46:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:10.539  /dev/nbd1'
00:10:10.539     13:46:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:10.539  /dev/nbd1'
00:10:10.539     13:46:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:10.539    13:46:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:10.539    13:46:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:10.539  256+0 records in
00:10:10.539  256+0 records out
00:10:10.539  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712952 s, 147 MB/s
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:10.539  256+0 records in
00:10:10.539  256+0 records out
00:10:10.539  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305031 s, 34.4 MB/s
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:10.539  256+0 records in
00:10:10.539  256+0 records out
00:10:10.539  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307471 s, 34.1 MB/s
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
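[annotation] The data-path check just completed is write-then-verify: 1 MiB of /dev/urandom goes into a scratch file, the file is written to each NBD device with O_DIRECT, and cmp reads it back byte-for-byte. The whole pattern, as traced:

    tmp=/tmp/nbdrandtest                  # stand-in for the repo-relative path in this log
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"        # verify phase: byte-compare the first 1 MiB
    done
    rm "$tmp"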
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:10.539   13:46:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:10.835    13:46:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:11.125    13:46:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:11.125   13:46:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:11.125    13:46:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:11.125    13:46:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:11.125     13:46:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:11.384    13:46:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:11.384     13:46:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:11.384     13:46:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:11.384    13:46:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:11.384     13:46:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:11.384     13:46:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:11.384     13:46:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:11.384    13:46:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:11.384    13:46:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:11.384   13:46:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:11.384   13:46:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:11.384   13:46:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
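[annotation] The "true" at nbd_common.sh@65 above is what lets the empty-list case pass: with no devices, grep -c /dev/nbd matches nothing and exits non-zero even though it prints 0, so the count has to be rescued. An equivalent standalone check:

    count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]    # both nbd devices were stopped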
00:10:11.384   13:46:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:11.951   13:46:54 event.app_repeat -- event/event.sh@35 -- # sleep 3
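[annotation] Each round ends the same way: stop the nbd devices, verify the count is zero, ask the target to terminate itself over RPC, and sleep so the next round starts against a fresh instance:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3    # let the app exit before the next spdk_app_start round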
00:10:13.326  [2024-12-11 13:46:56.086585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:13.585  [2024-12-11 13:46:56.248843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:13.585  [2024-12-11 13:46:56.248851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.844  [2024-12-11 13:46:56.531098] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:13.844  [2024-12-11 13:46:56.531198] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:15.222   13:46:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:15.222  spdk_app_start Round 1
00:10:15.222   13:46:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:10:15.222   13:46:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70414 /var/tmp/spdk-nbd.sock
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70414 ']'
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:15.222  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:15.222   13:46:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:15.222   13:46:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:15.482  Malloc0
00:10:15.482   13:46:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:15.741  Malloc1
00:10:15.741   13:46:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:15.741   13:46:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:16.001  /dev/nbd0
00:10:16.001    13:46:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:16.001   13:46:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:16.001  1+0 records in
00:10:16.001  1+0 records out
00:10:16.001  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232177 s, 17.6 MB/s
00:10:16.001    13:46:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:16.001   13:46:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:16.001   13:46:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:16.001   13:46:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:16.001   13:46:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:16.259  /dev/nbd1
00:10:16.259    13:46:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:16.259   13:46:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:16.259   13:46:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:16.259   13:46:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:16.259   13:46:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:16.259   13:46:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:16.260   13:46:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:16.260   13:46:58 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:16.260   13:46:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:16.260   13:46:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:16.260   13:46:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:16.260  1+0 records in
00:10:16.260  1+0 records out
00:10:16.260  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253958 s, 16.1 MB/s
00:10:16.260    13:46:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:16.260   13:46:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:16.260   13:46:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:16.260   13:46:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:16.260   13:46:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:16.260   13:46:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:16.260   13:46:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:16.260    13:46:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:16.260    13:46:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:16.260     13:46:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:16.518    13:46:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:16.518    {
00:10:16.518      "nbd_device": "/dev/nbd0",
00:10:16.518      "bdev_name": "Malloc0"
00:10:16.518    },
00:10:16.518    {
00:10:16.518      "nbd_device": "/dev/nbd1",
00:10:16.518      "bdev_name": "Malloc1"
00:10:16.518    }
00:10:16.518  ]'
00:10:16.518     13:46:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:16.518    {
00:10:16.518      "nbd_device": "/dev/nbd0",
00:10:16.518      "bdev_name": "Malloc0"
00:10:16.518    },
00:10:16.518    {
00:10:16.518      "nbd_device": "/dev/nbd1",
00:10:16.518      "bdev_name": "Malloc1"
00:10:16.518    }
00:10:16.518  ]'
00:10:16.518     13:46:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:16.518    13:46:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:16.518  /dev/nbd1'
00:10:16.518     13:46:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:16.518  /dev/nbd1'
00:10:16.518     13:46:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:16.518    13:46:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:16.518    13:46:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:16.777   13:46:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:16.778  256+0 records in
00:10:16.778  256+0 records out
00:10:16.778  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660197 s, 159 MB/s
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:16.778  256+0 records in
00:10:16.778  256+0 records out
00:10:16.778  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023835 s, 44.0 MB/s
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:16.778  256+0 records in
00:10:16.778  256+0 records out
00:10:16.778  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286183 s, 36.6 MB/s
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:16.778   13:46:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:17.037    13:46:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:17.037   13:46:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:17.295    13:46:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:17.295   13:46:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:17.295    13:46:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:17.295    13:46:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:17.295     13:46:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:17.554    13:47:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:17.554     13:47:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:17.554     13:47:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:17.554    13:47:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:17.554     13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:17.554     13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:17.554     13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:17.554    13:47:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:17.554    13:47:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:17.554   13:47:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:17.554   13:47:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:17.554   13:47:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:10:17.554   13:47:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:18.121   13:47:00 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:19.552  [2024-12-11 13:47:01.911152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:19.552  [2024-12-11 13:47:02.040053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.552  [2024-12-11 13:47:02.040053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:19.552  [2024-12-11 13:47:02.280461] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:19.552  [2024-12-11 13:47:02.280712] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:20.928  spdk_app_start Round 2
00:10:20.928   13:47:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:20.928   13:47:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:10:20.928   13:47:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70414 /var/tmp/spdk-nbd.sock
00:10:20.928   13:47:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70414 ']'
00:10:20.928   13:47:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:20.928   13:47:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:20.928   13:47:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:20.928  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:20.928   13:47:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:20.928   13:47:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:21.187   13:47:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:21.187   13:47:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:21.187   13:47:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:21.445  Malloc0
00:10:21.445   13:47:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:21.704  Malloc1
00:10:21.704   13:47:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:21.704   13:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:21.705   13:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:21.705   13:47:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:21.963  /dev/nbd0
00:10:21.963    13:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:21.963   13:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:21.963  1+0 records in
00:10:21.963  1+0 records out
00:10:21.963  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286558 s, 14.3 MB/s
00:10:21.963    13:47:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:21.963   13:47:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
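The trace at autotest_common.sh@872-893 above shows both halves of waitfornbd: wait for the device name to appear in /proc/partitions, then prove the device is actually readable with a single 4 KiB O_DIRECT read. A hedged reconstruction, noting that only the success path is visible in this trace, so the retry delay and failure return are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait (up to 20 tries) for the kernel to publish the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; not visible in the trace
        done
        # Then require one successful 4 KiB direct read off the device.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1   # assumed failure path
    }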
00:10:21.963   13:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:21.963   13:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:21.963   13:47:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:22.222  /dev/nbd1
00:10:22.481    13:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:22.481   13:47:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:22.481  1+0 records in
00:10:22.481  1+0 records out
00:10:22.481  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283359 s, 14.5 MB/s
00:10:22.481    13:47:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:22.481   13:47:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:22.481   13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:22.481   13:47:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
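The nbd_common.sh@9-@17 trace above is nbd_start_disks: walk the bdev list and device list in lockstep, attach each pair over RPC, and gate on waitfornbd before moving on. In outline (the unquoted array expansions are intentional, since the helper takes space-separated strings as shown in the trace):

    nbd_start_disks() {
        local rpc_server=$1 i
        local RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local -a bdev_list=($2) nbd_list=($3)   # split on spaces by design
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # nbd_start_disk <bdev_name> <nbd_device>; prints the device path on success
            "$RPC" -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }
    nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'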
00:10:22.481    13:47:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:22.481    13:47:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:22.481     13:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:22.739    13:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:22.739    {
00:10:22.739      "nbd_device": "/dev/nbd0",
00:10:22.739      "bdev_name": "Malloc0"
00:10:22.739    },
00:10:22.739    {
00:10:22.739      "nbd_device": "/dev/nbd1",
00:10:22.739      "bdev_name": "Malloc1"
00:10:22.739    }
00:10:22.739  ]'
00:10:22.739     13:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:22.739    {
00:10:22.739      "nbd_device": "/dev/nbd0",
00:10:22.739      "bdev_name": "Malloc0"
00:10:22.739    },
00:10:22.739    {
00:10:22.739      "nbd_device": "/dev/nbd1",
00:10:22.739      "bdev_name": "Malloc1"
00:10:22.739    }
00:10:22.739  ]'
00:10:22.739     13:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:22.739    13:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:22.739  /dev/nbd1'
00:10:22.739     13:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:22.739  /dev/nbd1'
00:10:22.739     13:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:22.739    13:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:22.739    13:47:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
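The nbd_get_count sequence traced above turns the nbd_get_disks JSON into a device count: jq extracts each nbd_device path, and grep -c counts the /dev/nbd matches. A compact equivalent; the "|| true" guards the empty case, where grep -c exits non-zero but still prints 0 (visible later in this log when the count drops to zero):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    disks_json=$("$RPC" -s "$SOCK" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -eq 2 ] || echo "expected 2 NBD devices, found $count" >&2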
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:22.739  256+0 records in
00:10:22.739  256+0 records out
00:10:22.739  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00856693 s, 122 MB/s
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:22.739  256+0 records in
00:10:22.739  256+0 records out
00:10:22.739  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244835 s, 42.8 MB/s
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:22.739  256+0 records in
00:10:22.739  256+0 records out
00:10:22.739  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265382 s, 39.5 MB/s
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
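The block above is the data-integrity core of the test: fill a 1 MiB temp file from /dev/urandom, write it to every NBD device with O_DIRECT, then byte-compare the file against each device. In sketch form, with paths and the two-device list taken from this run's environment:

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # One shared random payload: 256 x 4 KiB = 1 MiB.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"   # any mismatch exits non-zero and fails the test
    done
    rm "$tmp"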
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:22.739   13:47:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:22.997    13:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:22.997   13:47:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:23.256    13:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:23.256   13:47:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:23.256    13:47:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:23.256    13:47:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:23.256     13:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:23.514    13:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:23.514     13:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:23.514     13:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:23.514    13:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:23.514     13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:23.514     13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:23.514     13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:23.514    13:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:23.514    13:47:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:23.514   13:47:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:23.514   13:47:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:23.514   13:47:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
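Teardown mirrors setup: detach each device over RPC, poll until the kernel drops it from /proc/partitions (waitfornbd_exit in the trace above), then confirm nbd_get_disks reports an empty list. A sketch of that sequence; the retry delay is assumed, as in waitfornbd:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        # Wait for the kernel to retire the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    [ "$("$RPC" -s "$SOCK" nbd_get_disks | jq length)" -eq 0 ]   # nothing left attached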
00:10:23.514   13:47:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:23.773   13:47:06 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:25.167  [2024-12-11 13:47:07.836500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:25.426  [2024-12-11 13:47:07.965852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:25.426  [2024-12-11 13:47:07.965852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:25.684  [2024-12-11 13:47:08.210479] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:25.684  [2024-12-11 13:47:08.210562] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:27.060  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:27.060   13:47:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70414 /var/tmp/spdk-nbd.sock
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70414 ']'
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:27.060   13:47:09 event.app_repeat -- event/event.sh@39 -- # killprocess 70414
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70414 ']'
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70414
00:10:27.060    13:47:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:10:27.060   13:47:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:27.060    13:47:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70414
00:10:27.320  killing process with pid 70414
00:10:27.320   13:47:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:27.320   13:47:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:27.320   13:47:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70414'
00:10:27.320   13:47:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70414
00:10:27.320   13:47:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70414
00:10:28.256  spdk_app_start is called in Round 0.
00:10:28.256  Shutdown signal received, stop current app iteration
00:10:28.256  Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 reinitialization...
00:10:28.256  spdk_app_start is called in Round 1.
00:10:28.256  Shutdown signal received, stop current app iteration
00:10:28.256  Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 reinitialization...
00:10:28.256  spdk_app_start is called in Round 2.
00:10:28.256  Shutdown signal received, stop current app iteration
00:10:28.256  Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 reinitialization...
00:10:28.256  spdk_app_start is called in Round 3.
00:10:28.256  Shutdown signal received, stop current app iteration
00:10:28.256   13:47:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:10:28.256   13:47:11 event.app_repeat -- event/event.sh@42 -- # return 0
00:10:28.256  
00:10:28.256  real	0m20.118s
00:10:28.256  user	0m42.668s
00:10:28.256  sys	0m3.499s
00:10:28.256   13:47:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:28.256   13:47:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:28.256  ************************************
00:10:28.256  END TEST app_repeat
00:10:28.256  ************************************
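The four "Round 0" through "Round 3" messages above come from the app_repeat binary itself: each spdk_kill_instance SIGTERM makes the current spdk_app_start return, and the binary re-enters it for the next round, reinitializing the SPDK/DPDK environment each time. The driving script's side of one round, per the event.sh@34-@39 trace (waitforlisten and killprocess are SPDK's own helpers, assumed sourced from autotest_common.sh):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end the current round
    sleep 3                                                       # let the app re-enter spdk_app_start
    waitforlisten 70414 /var/tmp/spdk-nbd.sock                    # RPC socket is listening again
    killprocess 70414                                             # final shutdown after the last round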
00:10:28.516   13:47:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:10:28.516   13:47:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:28.516   13:47:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:28.516   13:47:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:28.516   13:47:11 event -- common/autotest_common.sh@10 -- # set +x
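The argument checks at @1105-@1111 above are the top of run_test, the harness that produces the START/END banners and the real/user/sys timing seen throughout this log, while tagging every xtrace line with the test's name. In outline (banner text matches this log; the xtrace name-tagging is omitted for brevity):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # run the test body and report real/user/sys
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }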
00:10:28.516  ************************************
00:10:28.516  START TEST cpu_locks
00:10:28.516  ************************************
00:10:28.516   13:47:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:28.516  * Looking for test storage...
00:10:28.516  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:28.516     13:47:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:10:28.516     13:47:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:28.516     13:47:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:28.516    13:47:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:28.516  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.516  		--rc genhtml_branch_coverage=1
00:10:28.516  		--rc genhtml_function_coverage=1
00:10:28.516  		--rc genhtml_legend=1
00:10:28.516  		--rc geninfo_all_blocks=1
00:10:28.516  		--rc geninfo_unexecuted_blocks=1
00:10:28.516  		
00:10:28.516  		'
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:28.516  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.516  		--rc genhtml_branch_coverage=1
00:10:28.516  		--rc genhtml_function_coverage=1
00:10:28.516  		--rc genhtml_legend=1
00:10:28.516  		--rc geninfo_all_blocks=1
00:10:28.516  		--rc geninfo_unexecuted_blocks=1
00:10:28.516  		
00:10:28.516  		'
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:28.516  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.516  		--rc genhtml_branch_coverage=1
00:10:28.516  		--rc genhtml_function_coverage=1
00:10:28.516  		--rc genhtml_legend=1
00:10:28.516  		--rc geninfo_all_blocks=1
00:10:28.516  		--rc geninfo_unexecuted_blocks=1
00:10:28.516  		
00:10:28.516  		'
00:10:28.516    13:47:11 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:28.516  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.516  		--rc genhtml_branch_coverage=1
00:10:28.516  		--rc genhtml_function_coverage=1
00:10:28.516  		--rc genhtml_legend=1
00:10:28.516  		--rc geninfo_all_blocks=1
00:10:28.516  		--rc geninfo_unexecuted_blocks=1
00:10:28.516  		
00:10:28.516  		'
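The long block above is scripts/common.sh deciding whether the installed lcov predates version 2, to choose the right coverage flags: split both version strings on '.', '-' and ':' and compare field by field numerically. A compact equivalent, assuming purely numeric fields (the real cmp_versions, via its decimal helper, also tolerates non-numeric components):

    lt() {
        local -a ver1 ver2
        local v a b len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a < b )) && return 0           # strictly less in this field
            (( a > b )) && return 1           # strictly greater: not less-than
        done
        return 1                              # all fields equal: not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2 lcov: keep the legacy --rc flags"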
00:10:28.516   13:47:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:10:28.516   13:47:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:10:28.516   13:47:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:10:28.516   13:47:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:10:28.516   13:47:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:28.516   13:47:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:28.516   13:47:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:28.775  ************************************
00:10:28.775  START TEST default_locks
00:10:28.775  ************************************
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70909
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70909
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70909 ']'
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:28.775  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:28.775   13:47:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:28.775  [2024-12-11 13:47:11.398655] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:28.775  [2024-12-11 13:47:11.398828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70909 ]
00:10:29.032  [2024-12-11 13:47:11.590227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:29.032  [2024-12-11 13:47:11.720807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.407   13:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:30.408   13:47:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:10:30.408   13:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70909
00:10:30.408   13:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70909
00:10:30.408   13:47:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
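locks_exist (cpu_locks.sh@22 above) is the core assertion of this whole suite: lslocks lists the file locks held by the target pid, and the grep looks for SPDK's per-core lock file name (spdk_cpu_lock under /var/tmp, per the error messages later in this log). In sketch form:

    locks_exist() {
        # Pass iff the pid holds a file lock whose path contains spdk_cpu_lock.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 70909 && echo "spdk_tgt 70909 holds its core lock"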
00:10:30.408   13:47:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70909
00:10:30.408   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70909 ']'
00:10:30.408   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70909
00:10:30.408    13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:10:30.408   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:30.408    13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70909
00:10:30.666   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:30.666   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:30.666  killing process with pid 70909
00:10:30.666   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70909'
00:10:30.666   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70909
00:10:30.666   13:47:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70909
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70909
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70909
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:33.236    13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70909
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70909 ']'
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:33.236  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:33.236  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70909) - No such process
00:10:33.236  ERROR: process (pid: 70909) is no longer running
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
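The @652-@679 trace above is the NOT wrapper asserting that waitforlisten fails once pid 70909 is gone: run the command, capture its exit status in es, and succeed only when es is non-zero. Reduced to its effect (the real helper also validates that the argument is a known function or executable, the valid_exec_arg step in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: pass only when the wrapped command failed
    }
    NOT waitforlisten 70909 /var/tmp/spdk.sock && echo "listener is gone, as required"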
00:10:33.236  
00:10:33.236  real	0m4.389s
00:10:33.236  user	0m4.258s
00:10:33.236  sys	0m0.786s
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:33.236   13:47:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:33.236  ************************************
00:10:33.236  END TEST default_locks
00:10:33.236  ************************************
00:10:33.236   13:47:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:10:33.236   13:47:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:33.236   13:47:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:33.236   13:47:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:33.236  ************************************
00:10:33.236  START TEST default_locks_via_rpc
00:10:33.236  ************************************
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70990
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70990
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70990 ']'
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:33.236   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:33.236  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:33.237   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:33.237   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:33.237   13:47:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:33.237  [2024-12-11 13:47:15.853210] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:33.237  [2024-12-11 13:47:15.853385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ]
00:10:33.495  [2024-12-11 13:47:16.046725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:33.495  [2024-12-11 13:47:16.174136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:34.431   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:34.431   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:34.431   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:10:34.431   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.431   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:34.690   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.690   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:10:34.690   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:34.690   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70990
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70990
00:10:34.691   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
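default_locks_via_rpc exercises the same lock files through RPC rather than process lifetime: framework_disable_cpumask_locks releases the per-core locks at runtime, no_locks checks none remain, framework_enable_cpumask_locks reclaims them, and the lslocks probe above confirms the lock is held again. The sequence, against this run's default socket (rpc.py falls back to /var/tmp/spdk.sock when -s is omitted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" framework_disable_cpumask_locks     # drop the core-0 lock file claim
    lslocks -p 70990 | grep -q spdk_cpu_lock && echo "lock unexpectedly still held" >&2
    "$RPC" framework_enable_cpumask_locks      # take the claim back
    lslocks -p 70990 | grep -q spdk_cpu_lock   # held again, as asserted above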
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70990
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70990 ']'
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70990
00:10:35.259    13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:35.259    13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70990
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:35.259  killing process with pid 70990
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70990'
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70990
00:10:35.259   13:47:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70990
00:10:37.821  
00:10:37.821  real	0m4.582s
00:10:37.821  user	0m4.569s
00:10:37.821  sys	0m0.858s
00:10:37.821   13:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:37.821  ************************************
00:10:37.821  END TEST default_locks_via_rpc
00:10:37.821  ************************************
00:10:37.821   13:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:37.821   13:47:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:10:37.821   13:47:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:37.821   13:47:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:37.821   13:47:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:37.821  ************************************
00:10:37.821  START TEST non_locking_app_on_locked_coremask
00:10:37.821  ************************************
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71069
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71069 /var/tmp/spdk.sock
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71069 ']'
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:37.821  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:37.821   13:47:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:37.821  [2024-12-11 13:47:20.502731] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:37.821  [2024-12-11 13:47:20.502909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71069 ]
00:10:38.082  [2024-12-11 13:47:20.697820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:38.082  [2024-12-11 13:47:20.832714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71091
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71091 /var/tmp/spdk2.sock
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71091 ']'
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:39.460  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:39.460   13:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:39.460  [2024-12-11 13:47:21.951098] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:39.460  [2024-12-11 13:47:21.951268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71091 ]
00:10:39.460  [2024-12-11 13:47:22.142622] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:39.460  [2024-12-11 13:47:22.146692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:39.718  [2024-12-11 13:47:22.405120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
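The setup above only works because the second target never competes for the lock: both spdk_tgt instances run on the same -m 0x1 mask, but pid 71091 passes --disable-cpumask-locks and so skips claiming the core-0 lock file, as its "CPU core locks deactivated" notice shows. The shape of the two launches, compressed (binary path per the trace; backgrounding and pid capture assumed):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                                                # pid 71069: claims the core-0 lock
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # pid 71091: shares core 0, no claim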
00:10:42.256   13:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:42.256   13:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:42.256   13:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71069
00:10:42.256   13:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71069
00:10:42.256   13:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71069
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71069 ']'
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71069
00:10:42.827    13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:42.827    13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71069
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71069'
00:10:42.827  killing process with pid 71069
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71069
00:10:42.827   13:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71069
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71091
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71091 ']'
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71091
00:10:48.163    13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:48.163    13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71091
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:48.163  killing process with pid 71091
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71091'
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71091
00:10:48.163   13:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71091
00:10:50.068  
00:10:50.068  real	0m12.433s
00:10:50.068  user	0m12.764s
00:10:50.068  sys	0m1.593s
00:10:50.068   13:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:50.068   13:47:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:50.068  ************************************
00:10:50.068  END TEST non_locking_app_on_locked_coremask
00:10:50.068  ************************************
00:10:50.328   13:47:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:10:50.328   13:47:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:50.328   13:47:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:50.328   13:47:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:50.328  ************************************
00:10:50.328  START TEST locking_app_on_unlocked_coremask
00:10:50.328  ************************************
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71239
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71239 /var/tmp/spdk.sock
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71239 ']'
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:50.328  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:50.328   13:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:50.328  [2024-12-11 13:47:33.003826] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:50.328  [2024-12-11 13:47:33.004048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71239 ]
00:10:50.587  [2024-12-11 13:47:33.197657] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:50.587  [2024-12-11 13:47:33.197710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.587  [2024-12-11 13:47:33.330104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71261
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71261 /var/tmp/spdk2.sock
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71261 ']'
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:51.965  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:51.965   13:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:51.965  [2024-12-11 13:47:34.426575] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:10:51.965  [2024-12-11 13:47:34.426762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71261 ]
00:10:51.965  [2024-12-11 13:47:34.628261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:52.224  [2024-12-11 13:47:34.896987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.805   13:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:54.805   13:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:54.805   13:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71261
00:10:54.805   13:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:54.805   13:47:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71261
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71239
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71239 ']'
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71239
00:10:55.741    13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:55.741    13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71239
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71239'
00:10:55.741  killing process with pid 71239
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71239
00:10:55.741   13:47:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71239
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71261
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71261 ']'
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71261
00:11:01.012    13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:01.012    13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71261
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71261'
00:11:01.012  killing process with pid 71261
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71261
00:11:01.012   13:47:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71261
00:11:03.545  ************************************
00:11:03.545  END TEST locking_app_on_unlocked_coremask
00:11:03.545  ************************************
00:11:03.545  
00:11:03.545  real	0m13.001s
00:11:03.545  user	0m13.344s
00:11:03.545  sys	0m1.775s
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:11:03.545   13:47:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:11:03.545   13:47:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:03.545   13:47:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:03.545   13:47:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:11:03.545  ************************************
00:11:03.545  START TEST locking_app_on_locked_coremask
00:11:03.545  ************************************
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71420
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71420 /var/tmp/spdk.sock
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71420 ']'
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:03.545  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:03.545   13:47:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:11:03.545  [2024-12-11 13:47:46.075525] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:03.545  [2024-12-11 13:47:46.075727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71420 ]
00:11:03.545  [2024-12-11 13:47:46.270889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:03.803  [2024-12-11 13:47:46.400822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71441
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71441 /var/tmp/spdk2.sock
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71441 /var/tmp/spdk2.sock
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:04.739    13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71441 /var/tmp/spdk2.sock
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71441 ']'
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:11:04.739  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:04.739   13:47:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:11:04.739  [2024-12-11 13:47:47.507544] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:04.740  [2024-12-11 13:47:47.507749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71441 ]
00:11:04.997  [2024-12-11 13:47:47.717758] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71420 has claimed it.
00:11:04.997  [2024-12-11 13:47:47.717822] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:11:05.565  ERROR: process (pid: 71441) is no longer running
00:11:05.565  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71441) - No such process
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
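Annotation: the es bookkeeping traced above is autotest_common.sh's NOT wrapper at work: it runs the wrapped command, records its exit status, and itself succeeds only if that command failed. A simplified, hedged reduction of the pattern (the real helper also special-cases signal exits above 128):

    # Hypothetical minimal NOT: succeed iff the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT false && echo "ok: wrapped command failed, so NOT succeeded"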
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71420
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71420
00:11:05.565   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:11:05.824   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71420
00:11:05.824   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71420 ']'
00:11:05.824   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71420
00:11:05.824    13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:11:05.824   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:05.824    13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71420
00:11:06.083  killing process with pid 71420
00:11:06.083   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:06.083   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:06.083   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71420'
00:11:06.083   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71420
00:11:06.083   13:47:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71420
00:11:08.682  
00:11:08.682  real	0m5.058s
00:11:08.682  user	0m5.263s
00:11:08.682  sys	0m0.958s
00:11:08.682   13:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:08.682   13:47:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:11:08.682  ************************************
00:11:08.682  END TEST locking_app_on_locked_coremask
00:11:08.682  ************************************
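Annotation: the locks_exist check traced above (cpu_locks.sh@22) decides whether the target still holds its core lock by listing the process's file locks. A minimal standalone sketch of the same check, assuming a live spdk_tgt PID:

    # Hedged sketch mirroring cpu_locks.sh's locks_exist; "$pid" is assumed to
    # be a running spdk_tgt that claimed its core mask at startup.
    locks_exist() {
        local pid=$1
        # SPDK takes one lock file per claimed core (/var/tmp/spdk_cpu_lock_NNN);
        # lslocks -p lists the locks that process currently holds.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 71420 && echo "core lock still held"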
00:11:08.682   13:47:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:11:08.682   13:47:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:08.682   13:47:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:08.682   13:47:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:11:08.682  ************************************
00:11:08.682  START TEST locking_overlapped_coremask
00:11:08.682  ************************************
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71511
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71511 /var/tmp/spdk.sock
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71511 ']'
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:08.682  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:08.682   13:47:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:11:08.682  [2024-12-11 13:47:51.197830] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:08.682  [2024-12-11 13:47:51.198342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71511 ]
00:11:08.682  [2024-12-11 13:47:51.392482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:08.942  [2024-12-11 13:47:51.527038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:08.942  [2024-12-11 13:47:51.527147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:08.942  [2024-12-11 13:47:51.527184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71534
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71534 /var/tmp/spdk2.sock
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71534 /var/tmp/spdk2.sock
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:09.879    13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:11:09.879  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71534 /var/tmp/spdk2.sock
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71534 ']'
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:09.879   13:47:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:11:09.879  [2024-12-11 13:47:52.610889] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:09.879  [2024-12-11 13:47:52.611012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71534 ]
00:11:10.138  [2024-12-11 13:47:52.792922] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71511 has claimed it.
00:11:10.138  [2024-12-11 13:47:52.793132] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:11:10.706  ERROR: process (pid: 71534) is no longer running
00:11:10.706  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71534) - No such process
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71511
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71511 ']'
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71511
00:11:10.706    13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:10.706    13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71511
00:11:10.706  killing process with pid 71511
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71511'
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71511
00:11:10.706   13:47:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71511
00:11:13.241  ************************************
00:11:13.241  END TEST locking_overlapped_coremask
00:11:13.241  ************************************
00:11:13.241  
00:11:13.241  real	0m4.724s
00:11:13.241  user	0m12.779s
00:11:13.241  sys	0m0.734s
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
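Annotation: check_remaining_locks (cpu_locks.sh@36-38, traced above) asserts that the surviving lock files are exactly the ones a 0x7 core mask should have produced. The comparison, restated as a plain sketch:

    # Glob whatever per-core lock files exist right now...
    locks=(/var/tmp/spdk_cpu_lock_*)
    # ...and build the set expected for cores 0-2 (mask 0x7).
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # Both arrays must match element for element.
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files"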
00:11:13.241   13:47:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:11:13.241   13:47:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:13.241   13:47:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:13.241   13:47:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:11:13.241  ************************************
00:11:13.241  START TEST locking_overlapped_coremask_via_rpc
00:11:13.241  ************************************
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71599
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71599 /var/tmp/spdk.sock
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71599 ']'
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:13.241  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.241   13:47:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:11:13.241  [2024-12-11 13:47:55.979321] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:13.241  [2024-12-11 13:47:55.979528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71599 ]
00:11:13.500  [2024-12-11 13:47:56.174723] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:11:13.500  [2024-12-11 13:47:56.174782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:13.758  [2024-12-11 13:47:56.310177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:13.758  [2024-12-11 13:47:56.311254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:13.758  [2024-12-11 13:47:56.311279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71617
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71617 /var/tmp/spdk2.sock
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71617 ']'
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:11:14.697  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:14.697   13:47:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:14.697  [2024-12-11 13:47:57.449351] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:14.697  [2024-12-11 13:47:57.449518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71617 ]
00:11:14.956  [2024-12-11 13:47:57.649189] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:11:14.956  [2024-12-11 13:47:57.652653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:15.215  [2024-12-11 13:47:57.929670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:15.215  [2024-12-11 13:47:57.932878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:15.215  [2024-12-11 13:47:57.932919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:17.750    13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.750  [2024-12-11 13:48:00.153882] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71599 has claimed it.
00:11:17.750  request:
00:11:17.750  {
00:11:17.750  "method": "framework_enable_cpumask_locks",
00:11:17.750  "req_id": 1
00:11:17.750  }
00:11:17.750  Got JSON-RPC error response
00:11:17.750  response:
00:11:17.750  {
00:11:17.750  "code": -32603,
00:11:17.750  "message": "Failed to claim CPU core: 2"
00:11:17.750  }
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
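Annotation: the -32603 response above is the expected outcome of replaying framework_enable_cpumask_locks against the second target while pid 71599 still holds core 2. The same negative check, issued directly (socket path as in the test):

    # Expected to fail while the first target's core locks are in place:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"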
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71599 /var/tmp/spdk.sock
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71599 ']'
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:17.750  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71617 /var/tmp/spdk2.sock
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71617 ']'
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:17.750  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:17.750   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:11:18.009  
00:11:18.009  real	0m4.762s
00:11:18.009  user	0m1.442s
00:11:18.009  sys	0m0.257s
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:18.009   13:48:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:18.009  ************************************
00:11:18.009  END TEST locking_overlapped_coremask_via_rpc
00:11:18.009  ************************************
00:11:18.009   13:48:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:11:18.009   13:48:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71599 ]]
00:11:18.009   13:48:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71599
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71599 ']'
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71599
00:11:18.009    13:48:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:18.009    13:48:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71599
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:18.009  killing process with pid 71599
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71599'
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71599
00:11:18.009   13:48:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71599
00:11:20.541   13:48:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71617 ]]
00:11:20.541   13:48:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71617
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71617 ']'
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71617
00:11:20.541    13:48:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:20.541    13:48:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71617
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:11:20.541  killing process with pid 71617
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71617'
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71617
00:11:20.541   13:48:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71617
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71599 ]]
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71599
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71599 ']'
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71599
00:11:23.071  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71599) - No such process
00:11:23.071  Process with pid 71599 is not found
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71599 is not found'
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71617 ]]
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71617
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71617 ']'
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71617
00:11:23.071  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71617) - No such process
00:11:23.071  Process with pid 71617 is not found
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71617 is not found'
00:11:23.071   13:48:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:11:23.071  
00:11:23.071  real	0m54.730s
00:11:23.071  user	1m32.417s
00:11:23.071  sys	0m8.383s
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:23.071   13:48:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:11:23.071  ************************************
00:11:23.071  END TEST cpu_locks
00:11:23.071  ************************************
00:11:23.330  ************************************
00:11:23.330  END TEST event
00:11:23.330  ************************************
00:11:23.330  
00:11:23.330  real	1m25.811s
00:11:23.330  user	2m31.928s
00:11:23.331  sys	0m13.240s
00:11:23.331   13:48:05 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:23.331   13:48:05 event -- common/autotest_common.sh@10 -- # set +x
00:11:23.331   13:48:05  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:11:23.331   13:48:05  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:23.331   13:48:05  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:23.331   13:48:05  -- common/autotest_common.sh@10 -- # set +x
00:11:23.331  ************************************
00:11:23.331  START TEST thread
00:11:23.331  ************************************
00:11:23.331   13:48:05 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:11:23.331  * Looking for test storage...
00:11:23.331  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:23.331     13:48:06 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:11:23.331     13:48:06 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:23.331    13:48:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:23.331    13:48:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:23.331    13:48:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:23.331    13:48:06 thread -- scripts/common.sh@336 -- # IFS=.-:
00:11:23.331    13:48:06 thread -- scripts/common.sh@336 -- # read -ra ver1
00:11:23.331    13:48:06 thread -- scripts/common.sh@337 -- # IFS=.-:
00:11:23.331    13:48:06 thread -- scripts/common.sh@337 -- # read -ra ver2
00:11:23.331    13:48:06 thread -- scripts/common.sh@338 -- # local 'op=<'
00:11:23.331    13:48:06 thread -- scripts/common.sh@340 -- # ver1_l=2
00:11:23.331    13:48:06 thread -- scripts/common.sh@341 -- # ver2_l=1
00:11:23.331    13:48:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:23.331    13:48:06 thread -- scripts/common.sh@344 -- # case "$op" in
00:11:23.331    13:48:06 thread -- scripts/common.sh@345 -- # : 1
00:11:23.331    13:48:06 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:23.331    13:48:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:23.331     13:48:06 thread -- scripts/common.sh@365 -- # decimal 1
00:11:23.331     13:48:06 thread -- scripts/common.sh@353 -- # local d=1
00:11:23.331     13:48:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:23.331     13:48:06 thread -- scripts/common.sh@355 -- # echo 1
00:11:23.331    13:48:06 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:11:23.331     13:48:06 thread -- scripts/common.sh@366 -- # decimal 2
00:11:23.331     13:48:06 thread -- scripts/common.sh@353 -- # local d=2
00:11:23.331     13:48:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:23.331     13:48:06 thread -- scripts/common.sh@355 -- # echo 2
00:11:23.331    13:48:06 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:11:23.331    13:48:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:23.331    13:48:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:23.331    13:48:06 thread -- scripts/common.sh@368 -- # return 0
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:23.331  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:23.331  		--rc genhtml_branch_coverage=1
00:11:23.331  		--rc genhtml_function_coverage=1
00:11:23.331  		--rc genhtml_legend=1
00:11:23.331  		--rc geninfo_all_blocks=1
00:11:23.331  		--rc geninfo_unexecuted_blocks=1
00:11:23.331  		
00:11:23.331  		'
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:23.331  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:23.331  		--rc genhtml_branch_coverage=1
00:11:23.331  		--rc genhtml_function_coverage=1
00:11:23.331  		--rc genhtml_legend=1
00:11:23.331  		--rc geninfo_all_blocks=1
00:11:23.331  		--rc geninfo_unexecuted_blocks=1
00:11:23.331  		
00:11:23.331  		'
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:23.331  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:23.331  		--rc genhtml_branch_coverage=1
00:11:23.331  		--rc genhtml_function_coverage=1
00:11:23.331  		--rc genhtml_legend=1
00:11:23.331  		--rc geninfo_all_blocks=1
00:11:23.331  		--rc geninfo_unexecuted_blocks=1
00:11:23.331  		
00:11:23.331  		'
00:11:23.331    13:48:06 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:23.331  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:23.331  		--rc genhtml_branch_coverage=1
00:11:23.331  		--rc genhtml_function_coverage=1
00:11:23.331  		--rc genhtml_legend=1
00:11:23.331  		--rc geninfo_all_blocks=1
00:11:23.331  		--rc geninfo_unexecuted_blocks=1
00:11:23.331  		
00:11:23.331  		'
00:11:23.331   13:48:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:11:23.331   13:48:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:11:23.331   13:48:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:23.331   13:48:06 thread -- common/autotest_common.sh@10 -- # set +x
00:11:23.590  ************************************
00:11:23.590  START TEST thread_poller_perf
00:11:23.590  ************************************
00:11:23.590   13:48:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:11:23.590  [2024-12-11 13:48:06.155424] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:23.590  [2024-12-11 13:48:06.155794] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71818 ]
00:11:23.590  [2024-12-11 13:48:06.343243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:23.849  [2024-12-11 13:48:06.513816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:23.849  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:11:25.228  ======================================
00:11:25.228  busy:2112603952 (cyc)
00:11:25.228  total_run_count: 358000
00:11:25.228  tsc_hz: 2100000000 (cyc)
00:11:25.228  ======================================
00:11:25.228  poller_cost: 5901 (cyc), 2810 (nsec)
00:11:25.228  
00:11:25.228  real	0m1.657s
00:11:25.228  user	0m1.442s
00:11:25.228  sys	0m0.114s
00:11:25.228   13:48:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:25.228   13:48:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:11:25.228  ************************************
00:11:25.228  END TEST thread_poller_perf
00:11:25.228  ************************************
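Annotation: poller_cost follows from the counters reported above: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A quick arithmetic check of this run's numbers:

    busy=2112603952 total_run_count=358000 tsc_hz=2100000000
    cyc=$(( busy / total_run_count ))        # 5901 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 5901 cyc / 2.1 GHz = 2810 nsec
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"
    # The 0-period run below checks out the same way: 2103993654 / 4585000 = 458 cyc -> 218 nsec.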
00:11:25.228   13:48:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:11:25.228   13:48:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:11:25.228   13:48:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:25.228   13:48:07 thread -- common/autotest_common.sh@10 -- # set +x
00:11:25.228  ************************************
00:11:25.228  START TEST thread_poller_perf
00:11:25.228  ************************************
00:11:25.228   13:48:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:11:25.228  [2024-12-11 13:48:07.881768] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:25.228  [2024-12-11 13:48:07.881927] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71854 ]
00:11:25.487  [2024-12-11 13:48:08.081829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.487  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:11:25.487  [2024-12-11 13:48:08.192437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:26.865  ======================================
00:11:26.865  busy:2103993654 (cyc)
00:11:26.865  total_run_count: 4585000
00:11:26.865  tsc_hz: 2100000000 (cyc)
00:11:26.865  ======================================
00:11:26.865  poller_cost: 458 (cyc), 218 (nsec)
00:11:26.865  
00:11:26.865  real	0m1.614s
00:11:26.865  user	0m1.382s
00:11:26.865  sys	0m0.130s
00:11:26.865   13:48:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.865  ************************************
00:11:26.865  END TEST thread_poller_perf
00:11:26.865  ************************************
00:11:26.865   13:48:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:11:26.865   13:48:09 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:11:26.865   13:48:09 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:11:26.865   13:48:09 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:26.865   13:48:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.865   13:48:09 thread -- common/autotest_common.sh@10 -- # set +x
00:11:26.865  ************************************
00:11:26.865  START TEST thread_spdk_lock
00:11:26.865  ************************************
00:11:26.865   13:48:09 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:11:26.865  [2024-12-11 13:48:09.556951] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:26.865  [2024-12-11 13:48:09.557111] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71891 ]
00:11:27.123  [2024-12-11 13:48:09.741973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:27.124  [2024-12-11 13:48:09.866347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:27.124  [2024-12-11 13:48:09.866357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:27.690  [2024-12-11 13:48:10.399878] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 990:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:11:27.690  [2024-12-11 13:48:10.399963] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3214:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:11:27.690  [2024-12-11 13:48:10.399980] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x5660c0fc8080
00:11:27.690  [2024-12-11 13:48:10.406362] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:11:27.690  [2024-12-11 13:48:10.406460] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1051:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:11:27.690  [2024-12-11 13:48:10.406508] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:11:27.948  Starting test contend
00:11:27.948    Worker    Delay  Wait us  Hold us Total us
00:11:27.948         0        3   127373   192222   319596
00:11:27.948         1        5    41537   298164   339702
00:11:27.948  PASS test contend
00:11:27.948  Starting test hold_by_poller
00:11:27.948  PASS test hold_by_poller
00:11:27.948  Starting test hold_by_message
00:11:27.948  PASS test hold_by_message
00:11:27.948  /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:11:27.948     100014 assertions passed
00:11:27.948          0 assertions failed
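Annotation: in the contend table above, each worker's Total us is Wait us plus Hold us, up to per-iteration rounding: worker 0 gives 127373 + 192222 = 319595 (reported 319596), worker 1 gives 41537 + 298164 = 339701 (reported 339702).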
00:11:27.948  
00:11:27.948  real	0m1.164s
00:11:27.948  user	0m1.489s
00:11:27.948  sys	0m0.120s
00:11:27.948  ************************************
00:11:27.948  END TEST thread_spdk_lock
00:11:27.948  ************************************
00:11:27.948   13:48:10 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.948   13:48:10 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:11:28.206  ************************************
00:11:28.206  END TEST thread
00:11:28.206  ************************************
00:11:28.206  
00:11:28.206  real	0m4.805s
00:11:28.206  user	0m4.459s
00:11:28.206  sys	0m0.596s
00:11:28.206   13:48:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:28.206   13:48:10 thread -- common/autotest_common.sh@10 -- # set +x
00:11:28.206   13:48:10  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:11:28.206   13:48:10  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:11:28.206   13:48:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:28.206   13:48:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:28.206   13:48:10  -- common/autotest_common.sh@10 -- # set +x
00:11:28.206  ************************************
00:11:28.206  START TEST app_cmdline
00:11:28.206  ************************************
00:11:28.206   13:48:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:11:28.206  * Looking for test storage...
00:11:28.206  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:11:28.206    13:48:10 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:28.206     13:48:10 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:28.206     13:48:10 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:11:28.206    13:48:10 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@345 -- # : 1
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:28.206     13:48:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:11:28.206     13:48:10 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:11:28.206     13:48:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:28.206     13:48:10 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:11:28.206    13:48:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:11:28.206     13:48:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:11:28.206     13:48:10 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:11:28.464     13:48:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:28.464     13:48:10 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:11:28.464    13:48:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:11:28.464    13:48:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:28.464    13:48:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:28.465    13:48:10 app_cmdline -- scripts/common.sh@368 -- # return 0
00:11:28.465    13:48:10 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:28.465    13:48:10 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:28.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:28.465  		--rc genhtml_branch_coverage=1
00:11:28.465  		--rc genhtml_function_coverage=1
00:11:28.465  		--rc genhtml_legend=1
00:11:28.465  		--rc geninfo_all_blocks=1
00:11:28.465  		--rc geninfo_unexecuted_blocks=1
00:11:28.465  		
00:11:28.465  		'
00:11:28.465    13:48:10 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:28.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:28.465  		--rc genhtml_branch_coverage=1
00:11:28.465  		--rc genhtml_function_coverage=1
00:11:28.465  		--rc genhtml_legend=1
00:11:28.465  		--rc geninfo_all_blocks=1
00:11:28.465  		--rc geninfo_unexecuted_blocks=1
00:11:28.465  		
00:11:28.465  		'
00:11:28.465    13:48:10 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:28.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:28.465  		--rc genhtml_branch_coverage=1
00:11:28.465  		--rc genhtml_function_coverage=1
00:11:28.465  		--rc genhtml_legend=1
00:11:28.465  		--rc geninfo_all_blocks=1
00:11:28.465  		--rc geninfo_unexecuted_blocks=1
00:11:28.465  		
00:11:28.465  		'
00:11:28.465    13:48:10 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:28.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:28.465  		--rc genhtml_branch_coverage=1
00:11:28.465  		--rc genhtml_function_coverage=1
00:11:28.465  		--rc genhtml_legend=1
00:11:28.465  		--rc geninfo_all_blocks=1
00:11:28.465  		--rc geninfo_unexecuted_blocks=1
00:11:28.465  		
00:11:28.465  		'
00:11:28.465   13:48:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:11:28.465   13:48:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71975
00:11:28.465   13:48:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:11:28.465   13:48:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71975
00:11:28.465   13:48:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71975 ']'
00:11:28.465   13:48:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:28.465   13:48:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:28.465   13:48:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:28.464  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:28.465   13:48:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:28.465   13:48:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:28.465  [2024-12-11 13:48:11.081711] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:28.465  [2024-12-11 13:48:11.082735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71975 ]
00:11:28.722  [2024-12-11 13:48:11.278449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:28.722  [2024-12-11 13:48:11.409606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:11:30.094  {
00:11:30.094    "version": "SPDK v25.01-pre git sha1 3aefe4228",
00:11:30.094    "fields": {
00:11:30.094      "major": 25,
00:11:30.094      "minor": 1,
00:11:30.094      "patch": 0,
00:11:30.094      "suffix": "-pre",
00:11:30.094      "commit": "3aefe4228"
00:11:30.094    }
00:11:30.094  }
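Annotation: the version blob above is plain JSON on stdout, so it composes with the jq filter the script already uses at cmdline.sh@26; e.g., extracting the printable string:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version | jq -r '.version'
    # -> SPDK v25.01-pre git sha1 3aefe4228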
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:11:30.094    13:48:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:11:30.094    13:48:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.094    13:48:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:30.094    13:48:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:11:30.094    13:48:12 app_cmdline -- app/cmdline.sh@26 -- # sort
00:11:30.094    13:48:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:11:30.094   13:48:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:30.094    13:48:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:30.094    13:48:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:11:30.094   13:48:12 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:30.351  request:
00:11:30.351  {
00:11:30.351    "method": "env_dpdk_get_mem_stats",
00:11:30.351    "req_id": 1
00:11:30.351  }
00:11:30.351  Got JSON-RPC error response
00:11:30.351  response:
00:11:30.351  {
00:11:30.351    "code": -32601,
00:11:30.351    "message": "Method not found"
00:11:30.351  }
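Because spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, any other method is rejected with JSON-RPC error -32601 (Method not found) before it is dispatched; the NOT helper wrapping this call treats that failure as the expected outcome (es=1 below). A sketch of triggering the same rejection by hand:

    # Hedged sketch: a method outside the allow-list fails with -32601.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    echo $?   # non-zero exit, mirroring es=1 in the trace below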
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:30.351   13:48:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71975
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71975 ']'
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71975
00:11:30.351    13:48:12 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:11:30.351   13:48:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:30.351    13:48:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71975
00:11:30.351  killing process with pid 71975
00:11:30.351   13:48:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:30.351   13:48:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:30.351   13:48:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71975'
00:11:30.351   13:48:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 71975
00:11:30.351   13:48:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 71975
00:11:32.880  ************************************
00:11:32.880  END TEST app_cmdline
00:11:32.880  ************************************
00:11:32.880  
00:11:32.880  real	0m4.647s
00:11:32.880  user	0m4.903s
00:11:32.880  sys	0m0.727s
00:11:32.880   13:48:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:32.880   13:48:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:32.880   13:48:15  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:11:32.880   13:48:15  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:32.880   13:48:15  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:32.880   13:48:15  -- common/autotest_common.sh@10 -- # set +x
00:11:32.880  ************************************
00:11:32.880  START TEST version
00:11:32.880  ************************************
00:11:32.880   13:48:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:11:32.880  * Looking for test storage...
00:11:32.880  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:11:32.880    13:48:15 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:32.880     13:48:15 version -- common/autotest_common.sh@1711 -- # lcov --version
00:11:32.880     13:48:15 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:33.139    13:48:15 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:33.139    13:48:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:33.139    13:48:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:33.139    13:48:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:33.139    13:48:15 version -- scripts/common.sh@336 -- # IFS=.-:
00:11:33.139    13:48:15 version -- scripts/common.sh@336 -- # read -ra ver1
00:11:33.139    13:48:15 version -- scripts/common.sh@337 -- # IFS=.-:
00:11:33.139    13:48:15 version -- scripts/common.sh@337 -- # read -ra ver2
00:11:33.139    13:48:15 version -- scripts/common.sh@338 -- # local 'op=<'
00:11:33.139    13:48:15 version -- scripts/common.sh@340 -- # ver1_l=2
00:11:33.139    13:48:15 version -- scripts/common.sh@341 -- # ver2_l=1
00:11:33.139    13:48:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:33.139    13:48:15 version -- scripts/common.sh@344 -- # case "$op" in
00:11:33.139    13:48:15 version -- scripts/common.sh@345 -- # : 1
00:11:33.139    13:48:15 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:33.139    13:48:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:33.139     13:48:15 version -- scripts/common.sh@365 -- # decimal 1
00:11:33.139     13:48:15 version -- scripts/common.sh@353 -- # local d=1
00:11:33.139     13:48:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:33.139     13:48:15 version -- scripts/common.sh@355 -- # echo 1
00:11:33.139    13:48:15 version -- scripts/common.sh@365 -- # ver1[v]=1
00:11:33.139     13:48:15 version -- scripts/common.sh@366 -- # decimal 2
00:11:33.139     13:48:15 version -- scripts/common.sh@353 -- # local d=2
00:11:33.139     13:48:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:33.139     13:48:15 version -- scripts/common.sh@355 -- # echo 2
00:11:33.139    13:48:15 version -- scripts/common.sh@366 -- # ver2[v]=2
00:11:33.139    13:48:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:33.139    13:48:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:33.139    13:48:15 version -- scripts/common.sh@368 -- # return 0
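The trace above is scripts/common.sh comparing the installed lcov version against 2: lt delegates to cmp_versions, which splits both strings on '.', '-' and ':' and compares them numerically field by field (here 1 < 2, so it returns 0 and the newer lcov options are enabled below). A hedged sketch of calling the same helpers directly:

    # Hedged sketch; assumes the helpers from scripts/common.sh are sourced.
    source /home/vagrant/spdk_repo/spdk/scripts/common.sh
    lt 1.15 2 && echo "1.15 < 2"   # matches the 'return 0' above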
00:11:33.139    13:48:15 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:33.139    13:48:15 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:33.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.139  		--rc genhtml_branch_coverage=1
00:11:33.139  		--rc genhtml_function_coverage=1
00:11:33.139  		--rc genhtml_legend=1
00:11:33.139  		--rc geninfo_all_blocks=1
00:11:33.139  		--rc geninfo_unexecuted_blocks=1
00:11:33.139  		
00:11:33.139  		'
00:11:33.139    13:48:15 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:33.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.139  		--rc genhtml_branch_coverage=1
00:11:33.139  		--rc genhtml_function_coverage=1
00:11:33.139  		--rc genhtml_legend=1
00:11:33.139  		--rc geninfo_all_blocks=1
00:11:33.139  		--rc geninfo_unexecuted_blocks=1
00:11:33.139  		
00:11:33.139  		'
00:11:33.139    13:48:15 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:33.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.139  		--rc genhtml_branch_coverage=1
00:11:33.139  		--rc genhtml_function_coverage=1
00:11:33.139  		--rc genhtml_legend=1
00:11:33.139  		--rc geninfo_all_blocks=1
00:11:33.139  		--rc geninfo_unexecuted_blocks=1
00:11:33.139  		
00:11:33.139  		'
00:11:33.139    13:48:15 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:33.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.139  		--rc genhtml_branch_coverage=1
00:11:33.139  		--rc genhtml_function_coverage=1
00:11:33.139  		--rc genhtml_legend=1
00:11:33.139  		--rc geninfo_all_blocks=1
00:11:33.139  		--rc geninfo_unexecuted_blocks=1
00:11:33.139  		
00:11:33.139  		'
00:11:33.139    13:48:15 version -- app/version.sh@17 -- # get_header_version major
00:11:33.139    13:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # tr -d '"'
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # cut -f2
00:11:33.139   13:48:15 version -- app/version.sh@17 -- # major=25
00:11:33.139    13:48:15 version -- app/version.sh@18 -- # get_header_version minor
00:11:33.139    13:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # tr -d '"'
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # cut -f2
00:11:33.139   13:48:15 version -- app/version.sh@18 -- # minor=1
00:11:33.139    13:48:15 version -- app/version.sh@19 -- # get_header_version patch
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # cut -f2
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # tr -d '"'
00:11:33.139    13:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:33.139   13:48:15 version -- app/version.sh@19 -- # patch=0
00:11:33.139    13:48:15 version -- app/version.sh@20 -- # get_header_version suffix
00:11:33.139    13:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # cut -f2
00:11:33.139    13:48:15 version -- app/version.sh@14 -- # tr -d '"'
00:11:33.139   13:48:15 version -- app/version.sh@20 -- # suffix=-pre
00:11:33.139   13:48:15 version -- app/version.sh@22 -- # version=25.1
00:11:33.139   13:48:15 version -- app/version.sh@25 -- # (( patch != 0 ))
00:11:33.139   13:48:15 version -- app/version.sh@28 -- # version=25.1rc0
00:11:33.139   13:48:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:33.139    13:48:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:11:33.139   13:48:15 version -- app/version.sh@30 -- # py_version=25.1rc0
00:11:33.139   13:48:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
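Steps @17-@31 above rebuild the version from the SPDK_VERSION_* macros in include/spdk/version.h (major 25, minor 1, patch 0, suffix -pre), map the -pre suffix to rc0 because patch is 0, and require the Python package to report the identical 25.1rc0. A condensed sketch of the header parsing (helper name from the trace; the uppercasing is a paraphrase):

    # Hedged sketch of get_header_version as driven above.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    echo "$(get_header_version major).$(get_header_version minor)rc0"   # -> 25.1rc0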
00:11:33.139  
00:11:33.139  real	0m0.276s
00:11:33.139  user	0m0.158s
00:11:33.139  sys	0m0.169s
00:11:33.139  ************************************
00:11:33.139  END TEST version
00:11:33.139  ************************************
00:11:33.139   13:48:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:33.139   13:48:15 version -- common/autotest_common.sh@10 -- # set +x
00:11:33.139   13:48:15  -- spdk/autotest.sh@179 -- # '[' 1 -eq 1 ']'
00:11:33.139   13:48:15  -- spdk/autotest.sh@180 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:11:33.139   13:48:15  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:33.139   13:48:15  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:33.139   13:48:15  -- common/autotest_common.sh@10 -- # set +x
00:11:33.139  ************************************
00:11:33.139  START TEST blockdev_general
00:11:33.139  ************************************
00:11:33.139   13:48:15 blockdev_general -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:11:33.139  * Looking for test storage...
00:11:33.440  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:11:33.440    13:48:15 blockdev_general -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:33.440     13:48:15 blockdev_general -- common/autotest_common.sh@1711 -- # lcov --version
00:11:33.440     13:48:15 blockdev_general -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:33.440    13:48:16 blockdev_general -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@336 -- # IFS=.-:
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@336 -- # read -ra ver1
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@337 -- # IFS=.-:
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@337 -- # read -ra ver2
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@338 -- # local 'op=<'
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@340 -- # ver1_l=2
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@341 -- # ver2_l=1
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@344 -- # case "$op" in
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@345 -- # : 1
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@365 -- # decimal 1
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@353 -- # local d=1
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@355 -- # echo 1
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@365 -- # ver1[v]=1
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@366 -- # decimal 2
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@353 -- # local d=2
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:33.440     13:48:16 blockdev_general -- scripts/common.sh@355 -- # echo 2
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@366 -- # ver2[v]=2
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:33.440    13:48:16 blockdev_general -- scripts/common.sh@368 -- # return 0
00:11:33.440    13:48:16 blockdev_general -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:33.440    13:48:16 blockdev_general -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:33.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.440  		--rc genhtml_branch_coverage=1
00:11:33.440  		--rc genhtml_function_coverage=1
00:11:33.440  		--rc genhtml_legend=1
00:11:33.440  		--rc geninfo_all_blocks=1
00:11:33.440  		--rc geninfo_unexecuted_blocks=1
00:11:33.440  		
00:11:33.440  		'
00:11:33.440    13:48:16 blockdev_general -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:33.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.440  		--rc genhtml_branch_coverage=1
00:11:33.440  		--rc genhtml_function_coverage=1
00:11:33.440  		--rc genhtml_legend=1
00:11:33.440  		--rc geninfo_all_blocks=1
00:11:33.440  		--rc geninfo_unexecuted_blocks=1
00:11:33.440  		
00:11:33.440  		'
00:11:33.440    13:48:16 blockdev_general -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:33.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.440  		--rc genhtml_branch_coverage=1
00:11:33.440  		--rc genhtml_function_coverage=1
00:11:33.440  		--rc genhtml_legend=1
00:11:33.440  		--rc geninfo_all_blocks=1
00:11:33.440  		--rc geninfo_unexecuted_blocks=1
00:11:33.440  		
00:11:33.440  		'
00:11:33.440    13:48:16 blockdev_general -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:33.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:33.440  		--rc genhtml_branch_coverage=1
00:11:33.440  		--rc genhtml_function_coverage=1
00:11:33.440  		--rc genhtml_legend=1
00:11:33.440  		--rc geninfo_all_blocks=1
00:11:33.440  		--rc geninfo_unexecuted_blocks=1
00:11:33.440  		
00:11:33.440  		'
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:11:33.440    13:48:16 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@20 -- # :
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:11:33.440    13:48:16 blockdev_general -- bdev/blockdev.sh@711 -- # uname -s
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@719 -- # test_type=bdev
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@720 -- # crypto_device=
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@721 -- # dek=
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@722 -- # env_ctx=
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@727 -- # [[ bdev == bdev ]]
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@728 -- # wait_for_rpc=--wait-for-rpc
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72165
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:33.440   13:48:16 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 72165
00:11:33.440   13:48:16 blockdev_general -- common/autotest_common.sh@835 -- # '[' -z 72165 ']'
00:11:33.440   13:48:16 blockdev_general -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:33.440   13:48:16 blockdev_general -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:33.440  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:33.441   13:48:16 blockdev_general -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:33.441   13:48:16 blockdev_general -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:33.441   13:48:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:33.441   13:48:16 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:11:33.441  [2024-12-11 13:48:16.144903] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:33.441  [2024-12-11 13:48:16.145093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72165 ]
00:11:33.699  [2024-12-11 13:48:16.342114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:33.699  [2024-12-11 13:48:16.475299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:34.636   13:48:17 blockdev_general -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:34.636   13:48:17 blockdev_general -- common/autotest_common.sh@868 -- # return 0
00:11:34.636   13:48:17 blockdev_general -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:11:34.636   13:48:17 blockdev_general -- bdev/blockdev.sh@733 -- # setup_bdev_conf
00:11:34.636   13:48:17 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd
00:11:34.636   13:48:17 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.636   13:48:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:35.573  [2024-12-11 13:48:18.090269] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:35.573  [2024-12-11 13:48:18.090341] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:35.573  
00:11:35.573  [2024-12-11 13:48:18.098231] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:35.573  [2024-12-11 13:48:18.098278] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:35.573  
00:11:35.573  Malloc0
00:11:35.573  Malloc1
00:11:35.573  Malloc2
00:11:35.573  Malloc3
00:11:35.573  Malloc4
00:11:35.832  Malloc5
00:11:35.832  Malloc6
00:11:35.832  Malloc7
00:11:35.832  Malloc8
00:11:35.832  Malloc9
00:11:35.832  [2024-12-11 13:48:18.595019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:35.832  [2024-12-11 13:48:18.595094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:35.832  [2024-12-11 13:48:18.595126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680
00:11:35.832  [2024-12-11 13:48:18.595138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:35.832  [2024-12-11 13:48:18.597563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:35.832  [2024-12-11 13:48:18.597608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:35.832  TestPT
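The notices above show the passthru vbdev lifecycle: creation is deferred until the base bdev arrives, after which vbdev_passthru opens and claims Malloc3 and registers TestPT on top of it. A hedged sketch of the equivalent standalone RPCs (sizes inferred from the 65536 x 512 B TestPT in the I/O-target listing later in this log):

    # Hedged sketch: create the base bdev, then the passthru that claims it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc3 32 512
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT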
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.090   13:48:18 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
00:11:36.090  5000+0 records in
00:11:36.090  5000+0 records out
00:11:36.090  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0307491 s, 333 MB/s
00:11:36.090   13:48:18 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:36.090  AIO0
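Above, dd builds a ~10 MB backing file (5000 blocks of 2048 bytes) and bdev_aio_create registers it as AIO0 with a 2048-byte logical block size, which is why AIO0 later reports 5000 blocks of 2048 bytes. By hand the same setup is roughly:

    # Hedged sketch of the AIO bdev setup traced above.
    dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048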
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.090   13:48:18 blockdev_general -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:36.090   13:48:18 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.090   13:48:18 blockdev_general -- bdev/blockdev.sh@777 -- # cat
00:11:36.090    13:48:18 blockdev_general -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:11:36.090    13:48:18 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.090    13:48:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:36.090    13:48:18 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.090    13:48:18 blockdev_general -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.091    13:48:18 blockdev_general -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.091   13:48:18 blockdev_general -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:11:36.091    13:48:18 blockdev_general -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:11:36.091    13:48:18 blockdev_general -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.091    13:48:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:36.350    13:48:19 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.350   13:48:19 blockdev_general -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:11:36.350    13:48:19 blockdev_general -- bdev/blockdev.sh@786 -- # jq -r .name
00:11:36.352    13:48:19 blockdev_general -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "86b4cd58-aa87-4d4b-8938-a81a79f83055"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "86b4cd58-aa87-4d4b-8938-a81a79f83055",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "0d5f80bd-b3c6-583b-9a5b-f1affbfef44c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "0d5f80bd-b3c6-583b-9a5b-f1affbfef44c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "73ef7338-0224-5ff2-acb1-730c965b8c2f"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "73ef7338-0224-5ff2-acb1-730c965b8c2f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "0105f1dc-e965-551b-931a-2216de4a6396"' '  ],' '  
"product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "0105f1dc-e965-551b-931a-2216de4a6396",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "ad7801da-7068-5127-8028-b634e4bb770b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ad7801da-7068-5127-8028-b634e4bb770b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "9adc5819-a822-5c33-a1e4-d2e0b9ddf0fe"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9adc5819-a822-5c33-a1e4-d2e0b9ddf0fe",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "f2644526-09b0-5238-9824-374520e38da8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "f2644526-09b0-5238-9824-374520e38da8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": 
false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "0a1185a6-8cb2-5ddb-b587-c951271df723"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "0a1185a6-8cb2-5ddb-b587-c951271df723",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "3f35c61d-9335-57e5-83e9-05bdb1536ca4"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "3f35c61d-9335-57e5-83e9-05bdb1536ca4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "9c3155bf-6960-5f9a-b6c6-61a883c16bbf"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9c3155bf-6960-5f9a-b6c6-61a883c16bbf",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    
"zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "2d369299-95c0-526f-964f-9d8bbd86db18"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "2d369299-95c0-526f-964f-9d8bbd86db18",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "af47e69e-bf97-5551-baf0-017b04bffbc5"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "af47e69e-bf97-5551-baf0-017b04bffbc5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "5b20c442-f780-4d5b-9b3d-7bf70849a691"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "5b20c442-f780-4d5b-9b3d-7bf70849a691",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": 
false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "5b20c442-f780-4d5b-9b3d-7bf70849a691",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "4afe5069-2858-4902-bc3a-12757644d491",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "14d35e88-2f34-46e6-aecc-e750810f5474",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "1096aeac-e469-4930-99ef-6919a8959cfa"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "1096aeac-e469-4930-99ef-6919a8959cfa",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "1096aeac-e469-4930-99ef-6919a8959cfa",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "900b2ba4-e2c9-4c0a-b82b-2bd92d6f549c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "8255ac82-04fa-4e7f-8db4-e58da37698e3",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    
"4b55a152-ead7-40c7-a7f9-33e96fdbcfa9"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "4b55a152-ead7-40c7-a7f9-33e96fdbcfa9",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "4b55a152-ead7-40c7-a7f9-33e96fdbcfa9",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "d35e5097-6105-45e3-860a-e59432cbd0db",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "cffecf5b-fda4-4bb9-9f1b-e622e55e833a",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "10cec670-8c00-470c-993a-11de71d37c1c"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "10cec670-8c00-470c-993a-11de71d37c1c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:11:36.352   13:48:19 blockdev_general -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:11:36.352   13:48:19 blockdev_general -- bdev/blockdev.sh@789 -- # hello_world_bdev=Malloc0
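Steps @785-@786 above gather every unclaimed bdev and its name (Malloc0 then becomes the hello-world device). The same query is runnable by hand against a live target, with the jq filter copied from the trace:

    # Hedged sketch: names of bdevs not claimed by another module.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'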
00:11:36.352   13:48:19 blockdev_general -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:11:36.352   13:48:19 blockdev_general -- bdev/blockdev.sh@791 -- # killprocess 72165
00:11:36.352   13:48:19 blockdev_general -- common/autotest_common.sh@954 -- # '[' -z 72165 ']'
00:11:36.352   13:48:19 blockdev_general -- common/autotest_common.sh@958 -- # kill -0 72165
00:11:36.352    13:48:19 blockdev_general -- common/autotest_common.sh@959 -- # uname
00:11:36.352   13:48:19 blockdev_general -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:36.610    13:48:19 blockdev_general -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72165
00:11:36.610   13:48:19 blockdev_general -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:36.610   13:48:19 blockdev_general -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:36.610  killing process with pid 72165
00:11:36.610   13:48:19 blockdev_general -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72165'
00:11:36.610   13:48:19 blockdev_general -- common/autotest_common.sh@973 -- # kill 72165
00:11:36.611   13:48:19 blockdev_general -- common/autotest_common.sh@978 -- # wait 72165
00:11:39.898   13:48:22 blockdev_general -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:39.898   13:48:22 blockdev_general -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:11:39.898   13:48:22 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:11:39.898   13:48:22 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.898   13:48:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:39.898  ************************************
00:11:39.898  START TEST bdev_hello_world
00:11:39.898  ************************************
00:11:39.898   13:48:22 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:11:40.158  [2024-12-11 13:48:22.751328] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:40.158  [2024-12-11 13:48:22.751463] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72265 ]
00:11:40.158  [2024-12-11 13:48:22.920807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:40.417  [2024-12-11 13:48:23.052595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:40.984  [2024-12-11 13:48:23.572215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:40.984  [2024-12-11 13:48:23.572292] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:40.984  [2024-12-11 13:48:23.580161] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:40.984  [2024-12-11 13:48:23.580215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:40.984  [2024-12-11 13:48:23.588152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:40.984  [2024-12-11 13:48:23.588195] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:40.984  [2024-12-11 13:48:23.588210] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:41.243  [2024-12-11 13:48:23.823643] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:41.243  [2024-12-11 13:48:23.823713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.243  [2024-12-11 13:48:23.823749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80
00:11:41.243  [2024-12-11 13:48:23.823767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.243  [2024-12-11 13:48:23.826650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.243  [2024-12-11 13:48:23.826695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:41.502  [2024-12-11 13:48:24.175455] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:11:41.502  [2024-12-11 13:48:24.175532] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:11:41.502  [2024-12-11 13:48:24.175591] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:11:41.502  [2024-12-11 13:48:24.175688] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:11:41.502  [2024-12-11 13:48:24.175749] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:11:41.502  [2024-12-11 13:48:24.175771] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:11:41.502  [2024-12-11 13:48:24.175814] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:11:41.502  
00:11:41.502  [2024-12-11 13:48:24.175844] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
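The hello_bdev notices above trace the whole example flow: start the app, open Malloc0, acquire an I/O channel, write "Hello World!", read it back, and stop. Outside the harness the same run would look roughly like this (command and config path from step @1129; running as root for hugepage access is an assumption):

    # Hedged sketch: re-run the hello_bdev example against the generated config.
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0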
00:11:44.043  
00:11:44.043  real	0m3.941s
00:11:44.043  user	0m3.410s
00:11:44.043  sys	0m0.403s
00:11:44.043   13:48:26 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:44.043   13:48:26 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:11:44.043  ************************************
00:11:44.043  END TEST bdev_hello_world
00:11:44.043  ************************************
00:11:44.043   13:48:26 blockdev_general -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:11:44.043   13:48:26 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:44.043   13:48:26 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:44.043   13:48:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:44.043  ************************************
00:11:44.043  START TEST bdev_bounds
00:11:44.043  ************************************
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72325
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72325'
00:11:44.043  Process bdevio pid: 72325
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72325
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72325 ']'
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:44.043  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:44.043   13:48:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:11:44.043  [2024-12-11 13:48:26.773939] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:44.043  [2024-12-11 13:48:26.774626] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72325 ]
00:11:44.356  [2024-12-11 13:48:26.966856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:44.356  [2024-12-11 13:48:27.106278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:44.356  [2024-12-11 13:48:27.106302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:44.356  [2024-12-11 13:48:27.106307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:44.922  [2024-12-11 13:48:27.581149] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:44.922  [2024-12-11 13:48:27.581218] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:44.922  [2024-12-11 13:48:27.589111] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:44.922  [2024-12-11 13:48:27.589158] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:44.922  [2024-12-11 13:48:27.597098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:44.922  [2024-12-11 13:48:27.597137] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:44.922  [2024-12-11 13:48:27.597155] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:45.181  [2024-12-11 13:48:27.813204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:45.181  [2024-12-11 13:48:27.813271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:45.181  [2024-12-11 13:48:27.813291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80
00:11:45.181  [2024-12-11 13:48:27.813304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:45.181  [2024-12-11 13:48:27.815975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:45.181  [2024-12-11 13:48:27.816016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
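
The passthru notices above show SPDK's deferred vbdev creation: the TestPT passthru is requested before its base bdev Malloc3 exists, registration is deferred ("vbdev creation deferred pending base bdev arrival"), and it completes once Malloc3 arrives, at which point the base bdev is opened and claimed and pt_bdev is registered. In this run the configuration comes from bdev.json (the --json flag on the bdevio command line); a hedged sketch of the equivalent live RPC, with the bdev names taken from the log lines above, would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_passthru_create -b Malloc3 -p TestPT
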
00:11:45.440   13:48:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:45.440   13:48:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:11:45.440   13:48:28 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
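
tests.py perform_tests above does not run the tests itself; it issues the bdevio app's "perform_tests" JSON-RPC over the socket the app is listening on (/var/tmp/spdk.sock, per the waitforlisten lines earlier), and the CUnit output below is the result. A rough hand-rolled equivalent (socket path from the log; availability of nc -U and the exact request framing are assumptions):

    printf '{"jsonrpc":"2.0","id":1,"method":"perform_tests"}\n' \
        | nc -U /var/tmp/spdk.sock
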
00:11:45.699  I/O targets:
00:11:45.699    Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:11:45.699    Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:11:45.699    Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:11:45.699    Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:11:45.699    TestPT: 65536 blocks of 512 bytes (32 MiB)
00:11:45.699    raid0: 131072 blocks of 512 bytes (64 MiB)
00:11:45.699    concat0: 131072 blocks of 512 bytes (64 MiB)
00:11:45.699    raid1: 65536 blocks of 512 bytes (32 MiB)
00:11:45.699    AIO0: 5000 blocks of 2048 bytes (10 MiB)
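
The sizes in the I/O target list are simply block count times block size; a couple of spot checks with values copied from the list:

    echo $((65536 * 512))    # Malloc0/TestPT/raid1: 33554432 B = 32 MiB
    echo $((131072 * 512))   # raid0/concat0:        67108864 B = 64 MiB
    echo $((5000 * 2048))    # AIO0:                 10240000 B ~ 9.77 MiB, listed rounded as 10 MiB
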
00:11:45.699  
00:11:45.699  
00:11:45.699       CUnit - A unit testing framework for C - Version 2.1-3
00:11:45.699       http://cunit.sourceforge.net/
00:11:45.699  
00:11:45.699  
00:11:45.699  Suite: bdevio tests on: AIO0
00:11:45.699    Test: blockdev write read block ...passed
00:11:45.699    Test: blockdev write zeroes read block ...passed
00:11:45.699    Test: blockdev write zeroes read no split ...passed
00:11:45.699    Test: blockdev write zeroes read split ...passed
00:11:45.699    Test: blockdev write zeroes read split partial ...passed
00:11:45.699    Test: blockdev reset ...passed
00:11:45.699    Test: blockdev write read 8 blocks ...passed
00:11:45.699    Test: blockdev write read size > 128k ...passed
00:11:45.699    Test: blockdev write read invalid size ...passed
00:11:45.699    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:45.699    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:45.699    Test: blockdev write read max offset ...passed
00:11:45.699    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:45.699    Test: blockdev writev readv 8 blocks ...passed
00:11:45.699    Test: blockdev writev readv 30 x 1block ...passed
00:11:45.699    Test: blockdev writev readv block ...passed
00:11:45.699    Test: blockdev writev readv size > 128k ...passed
00:11:45.699    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:45.699    Test: blockdev comparev and writev ...passed
00:11:45.699    Test: blockdev nvme passthru rw ...passed
00:11:45.699    Test: blockdev nvme passthru vendor specific ...passed
00:11:45.699    Test: blockdev nvme admin passthru ...passed
00:11:45.699    Test: blockdev copy ...passed
00:11:45.699  Suite: bdevio tests on: raid1
00:11:45.700    Test: blockdev write read block ...passed
00:11:45.700    Test: blockdev write zeroes read block ...passed
00:11:45.700    Test: blockdev write zeroes read no split ...passed
00:11:45.700    Test: blockdev write zeroes read split ...passed
00:11:45.700    Test: blockdev write zeroes read split partial ...passed
00:11:45.700    Test: blockdev reset ...passed
00:11:45.700    Test: blockdev write read 8 blocks ...passed
00:11:45.700    Test: blockdev write read size > 128k ...passed
00:11:45.700    Test: blockdev write read invalid size ...passed
00:11:45.700    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:45.700    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:45.700    Test: blockdev write read max offset ...passed
00:11:45.700    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:45.700    Test: blockdev writev readv 8 blocks ...passed
00:11:45.700    Test: blockdev writev readv 30 x 1block ...passed
00:11:45.700    Test: blockdev writev readv block ...passed
00:11:45.700    Test: blockdev writev readv size > 128k ...passed
00:11:45.700    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:45.700    Test: blockdev comparev and writev ...passed
00:11:45.700    Test: blockdev nvme passthru rw ...passed
00:11:45.700    Test: blockdev nvme passthru vendor specific ...passed
00:11:45.700    Test: blockdev nvme admin passthru ...passed
00:11:45.700    Test: blockdev copy ...passed
00:11:45.700  Suite: bdevio tests on: concat0
00:11:45.700    Test: blockdev write read block ...passed
00:11:45.700    Test: blockdev write zeroes read block ...passed
00:11:45.700    Test: blockdev write zeroes read no split ...passed
00:11:45.959    Test: blockdev write zeroes read split ...passed
00:11:45.959    Test: blockdev write zeroes read split partial ...passed
00:11:45.959    Test: blockdev reset ...passed
00:11:45.959    Test: blockdev write read 8 blocks ...passed
00:11:45.959    Test: blockdev write read size > 128k ...passed
00:11:45.959    Test: blockdev write read invalid size ...passed
00:11:45.959    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:45.959    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:45.959    Test: blockdev write read max offset ...passed
00:11:45.959    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:45.959    Test: blockdev writev readv 8 blocks ...passed
00:11:45.959    Test: blockdev writev readv 30 x 1block ...passed
00:11:45.959    Test: blockdev writev readv block ...passed
00:11:45.959    Test: blockdev writev readv size > 128k ...passed
00:11:45.959    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:45.959    Test: blockdev comparev and writev ...passed
00:11:45.959    Test: blockdev nvme passthru rw ...passed
00:11:45.959    Test: blockdev nvme passthru vendor specific ...passed
00:11:45.959    Test: blockdev nvme admin passthru ...passed
00:11:45.959    Test: blockdev copy ...passed
00:11:45.959  Suite: bdevio tests on: raid0
00:11:45.959    Test: blockdev write read block ...passed
00:11:45.959    Test: blockdev write zeroes read block ...passed
00:11:45.959    Test: blockdev write zeroes read no split ...passed
00:11:45.959    Test: blockdev write zeroes read split ...passed
00:11:45.959    Test: blockdev write zeroes read split partial ...passed
00:11:45.959    Test: blockdev reset ...passed
00:11:45.959    Test: blockdev write read 8 blocks ...passed
00:11:45.959    Test: blockdev write read size > 128k ...passed
00:11:45.959    Test: blockdev write read invalid size ...passed
00:11:45.959    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:45.959    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:45.959    Test: blockdev write read max offset ...passed
00:11:45.959    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:45.959    Test: blockdev writev readv 8 blocks ...passed
00:11:45.959    Test: blockdev writev readv 30 x 1block ...passed
00:11:45.959    Test: blockdev writev readv block ...passed
00:11:45.959    Test: blockdev writev readv size > 128k ...passed
00:11:45.959    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:45.959    Test: blockdev comparev and writev ...passed
00:11:45.959    Test: blockdev nvme passthru rw ...passed
00:11:45.959    Test: blockdev nvme passthru vendor specific ...passed
00:11:45.959    Test: blockdev nvme admin passthru ...passed
00:11:45.959    Test: blockdev copy ...passed
00:11:45.959  Suite: bdevio tests on: TestPT
00:11:45.959    Test: blockdev write read block ...passed
00:11:45.959    Test: blockdev write zeroes read block ...passed
00:11:45.959    Test: blockdev write zeroes read no split ...passed
00:11:45.959    Test: blockdev write zeroes read split ...passed
00:11:45.959    Test: blockdev write zeroes read split partial ...passed
00:11:45.959    Test: blockdev reset ...passed
00:11:45.959    Test: blockdev write read 8 blocks ...passed
00:11:45.959    Test: blockdev write read size > 128k ...passed
00:11:45.959    Test: blockdev write read invalid size ...passed
00:11:45.959    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:45.959    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:45.959    Test: blockdev write read max offset ...passed
00:11:45.959    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:45.959    Test: blockdev writev readv 8 blocks ...passed
00:11:45.959    Test: blockdev writev readv 30 x 1block ...passed
00:11:45.959    Test: blockdev writev readv block ...passed
00:11:45.959    Test: blockdev writev readv size > 128k ...passed
00:11:45.959    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:45.959    Test: blockdev comparev and writev ...passed
00:11:45.959    Test: blockdev nvme passthru rw ...passed
00:11:45.959    Test: blockdev nvme passthru vendor specific ...passed
00:11:45.959    Test: blockdev nvme admin passthru ...passed
00:11:45.959    Test: blockdev copy ...passed
00:11:45.959  Suite: bdevio tests on: Malloc2p7
00:11:45.959    Test: blockdev write read block ...passed
00:11:45.959    Test: blockdev write zeroes read block ...passed
00:11:45.959    Test: blockdev write zeroes read no split ...passed
00:11:45.959    Test: blockdev write zeroes read split ...passed
00:11:46.219    Test: blockdev write zeroes read split partial ...passed
00:11:46.219    Test: blockdev reset ...passed
00:11:46.219    Test: blockdev write read 8 blocks ...passed
00:11:46.219    Test: blockdev write read size > 128k ...passed
00:11:46.219    Test: blockdev write read invalid size ...passed
00:11:46.219    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.219    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.219    Test: blockdev write read max offset ...passed
00:11:46.219    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.219    Test: blockdev writev readv 8 blocks ...passed
00:11:46.219    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.219    Test: blockdev writev readv block ...passed
00:11:46.219    Test: blockdev writev readv size > 128k ...passed
00:11:46.219    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.219    Test: blockdev comparev and writev ...passed
00:11:46.219    Test: blockdev nvme passthru rw ...passed
00:11:46.219    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.219    Test: blockdev nvme admin passthru ...passed
00:11:46.219    Test: blockdev copy ...passed
00:11:46.219  Suite: bdevio tests on: Malloc2p6
00:11:46.219    Test: blockdev write read block ...passed
00:11:46.219    Test: blockdev write zeroes read block ...passed
00:11:46.219    Test: blockdev write zeroes read no split ...passed
00:11:46.219    Test: blockdev write zeroes read split ...passed
00:11:46.219    Test: blockdev write zeroes read split partial ...passed
00:11:46.219    Test: blockdev reset ...passed
00:11:46.219    Test: blockdev write read 8 blocks ...passed
00:11:46.219    Test: blockdev write read size > 128k ...passed
00:11:46.219    Test: blockdev write read invalid size ...passed
00:11:46.219    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.219    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.219    Test: blockdev write read max offset ...passed
00:11:46.219    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.219    Test: blockdev writev readv 8 blocks ...passed
00:11:46.219    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.219    Test: blockdev writev readv block ...passed
00:11:46.219    Test: blockdev writev readv size > 128k ...passed
00:11:46.219    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.219    Test: blockdev comparev and writev ...passed
00:11:46.219    Test: blockdev nvme passthru rw ...passed
00:11:46.219    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.219    Test: blockdev nvme admin passthru ...passed
00:11:46.219    Test: blockdev copy ...passed
00:11:46.219  Suite: bdevio tests on: Malloc2p5
00:11:46.219    Test: blockdev write read block ...passed
00:11:46.219    Test: blockdev write zeroes read block ...passed
00:11:46.219    Test: blockdev write zeroes read no split ...passed
00:11:46.219    Test: blockdev write zeroes read split ...passed
00:11:46.219    Test: blockdev write zeroes read split partial ...passed
00:11:46.219    Test: blockdev reset ...passed
00:11:46.219    Test: blockdev write read 8 blocks ...passed
00:11:46.219    Test: blockdev write read size > 128k ...passed
00:11:46.219    Test: blockdev write read invalid size ...passed
00:11:46.219    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.219    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.219    Test: blockdev write read max offset ...passed
00:11:46.219    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.219    Test: blockdev writev readv 8 blocks ...passed
00:11:46.219    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.219    Test: blockdev writev readv block ...passed
00:11:46.219    Test: blockdev writev readv size > 128k ...passed
00:11:46.219    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.219    Test: blockdev comparev and writev ...passed
00:11:46.219    Test: blockdev nvme passthru rw ...passed
00:11:46.219    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.219    Test: blockdev nvme admin passthru ...passed
00:11:46.219    Test: blockdev copy ...passed
00:11:46.219  Suite: bdevio tests on: Malloc2p4
00:11:46.219    Test: blockdev write read block ...passed
00:11:46.219    Test: blockdev write zeroes read block ...passed
00:11:46.219    Test: blockdev write zeroes read no split ...passed
00:11:46.219    Test: blockdev write zeroes read split ...passed
00:11:46.219    Test: blockdev write zeroes read split partial ...passed
00:11:46.219    Test: blockdev reset ...passed
00:11:46.219    Test: blockdev write read 8 blocks ...passed
00:11:46.219    Test: blockdev write read size > 128k ...passed
00:11:46.219    Test: blockdev write read invalid size ...passed
00:11:46.219    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.219    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.219    Test: blockdev write read max offset ...passed
00:11:46.219    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.219    Test: blockdev writev readv 8 blocks ...passed
00:11:46.219    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.219    Test: blockdev writev readv block ...passed
00:11:46.219    Test: blockdev writev readv size > 128k ...passed
00:11:46.219    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.219    Test: blockdev comparev and writev ...passed
00:11:46.219    Test: blockdev nvme passthru rw ...passed
00:11:46.219    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.219    Test: blockdev nvme admin passthru ...passed
00:11:46.219    Test: blockdev copy ...passed
00:11:46.219  Suite: bdevio tests on: Malloc2p3
00:11:46.219    Test: blockdev write read block ...passed
00:11:46.219    Test: blockdev write zeroes read block ...passed
00:11:46.219    Test: blockdev write zeroes read no split ...passed
00:11:46.479    Test: blockdev write zeroes read split ...passed
00:11:46.479    Test: blockdev write zeroes read split partial ...passed
00:11:46.479    Test: blockdev reset ...passed
00:11:46.479    Test: blockdev write read 8 blocks ...passed
00:11:46.479    Test: blockdev write read size > 128k ...passed
00:11:46.479    Test: blockdev write read invalid size ...passed
00:11:46.479    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.479    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.479    Test: blockdev write read max offset ...passed
00:11:46.479    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.479    Test: blockdev writev readv 8 blocks ...passed
00:11:46.479    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.479    Test: blockdev writev readv block ...passed
00:11:46.479    Test: blockdev writev readv size > 128k ...passed
00:11:46.479    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.479    Test: blockdev comparev and writev ...passed
00:11:46.479    Test: blockdev nvme passthru rw ...passed
00:11:46.479    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.479    Test: blockdev nvme admin passthru ...passed
00:11:46.479    Test: blockdev copy ...passed
00:11:46.479  Suite: bdevio tests on: Malloc2p2
00:11:46.479    Test: blockdev write read block ...passed
00:11:46.479    Test: blockdev write zeroes read block ...passed
00:11:46.479    Test: blockdev write zeroes read no split ...passed
00:11:46.479    Test: blockdev write zeroes read split ...passed
00:11:46.479    Test: blockdev write zeroes read split partial ...passed
00:11:46.479    Test: blockdev reset ...passed
00:11:46.479    Test: blockdev write read 8 blocks ...passed
00:11:46.479    Test: blockdev write read size > 128k ...passed
00:11:46.479    Test: blockdev write read invalid size ...passed
00:11:46.479    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.479    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.479    Test: blockdev write read max offset ...passed
00:11:46.479    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.479    Test: blockdev writev readv 8 blocks ...passed
00:11:46.479    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.479    Test: blockdev writev readv block ...passed
00:11:46.479    Test: blockdev writev readv size > 128k ...passed
00:11:46.479    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.479    Test: blockdev comparev and writev ...passed
00:11:46.479    Test: blockdev nvme passthru rw ...passed
00:11:46.479    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.479    Test: blockdev nvme admin passthru ...passed
00:11:46.479    Test: blockdev copy ...passed
00:11:46.479  Suite: bdevio tests on: Malloc2p1
00:11:46.479    Test: blockdev write read block ...passed
00:11:46.479    Test: blockdev write zeroes read block ...passed
00:11:46.479    Test: blockdev write zeroes read no split ...passed
00:11:46.479    Test: blockdev write zeroes read split ...passed
00:11:46.479    Test: blockdev write zeroes read split partial ...passed
00:11:46.479    Test: blockdev reset ...passed
00:11:46.479    Test: blockdev write read 8 blocks ...passed
00:11:46.479    Test: blockdev write read size > 128k ...passed
00:11:46.479    Test: blockdev write read invalid size ...passed
00:11:46.479    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.479    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.479    Test: blockdev write read max offset ...passed
00:11:46.479    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.479    Test: blockdev writev readv 8 blocks ...passed
00:11:46.479    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.479    Test: blockdev writev readv block ...passed
00:11:46.479    Test: blockdev writev readv size > 128k ...passed
00:11:46.479    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.479    Test: blockdev comparev and writev ...passed
00:11:46.479    Test: blockdev nvme passthru rw ...passed
00:11:46.479    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.479    Test: blockdev nvme admin passthru ...passed
00:11:46.479    Test: blockdev copy ...passed
00:11:46.479  Suite: bdevio tests on: Malloc2p0
00:11:46.479    Test: blockdev write read block ...passed
00:11:46.479    Test: blockdev write zeroes read block ...passed
00:11:46.479    Test: blockdev write zeroes read no split ...passed
00:11:46.479    Test: blockdev write zeroes read split ...passed
00:11:46.739    Test: blockdev write zeroes read split partial ...passed
00:11:46.739    Test: blockdev reset ...passed
00:11:46.739    Test: blockdev write read 8 blocks ...passed
00:11:46.739    Test: blockdev write read size > 128k ...passed
00:11:46.739    Test: blockdev write read invalid size ...passed
00:11:46.739    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.739    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.739    Test: blockdev write read max offset ...passed
00:11:46.739    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.739    Test: blockdev writev readv 8 blocks ...passed
00:11:46.739    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.739    Test: blockdev writev readv block ...passed
00:11:46.739    Test: blockdev writev readv size > 128k ...passed
00:11:46.739    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.739    Test: blockdev comparev and writev ...passed
00:11:46.739    Test: blockdev nvme passthru rw ...passed
00:11:46.739    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.739    Test: blockdev nvme admin passthru ...passed
00:11:46.739    Test: blockdev copy ...passed
00:11:46.739  Suite: bdevio tests on: Malloc1p1
00:11:46.739    Test: blockdev write read block ...passed
00:11:46.739    Test: blockdev write zeroes read block ...passed
00:11:46.739    Test: blockdev write zeroes read no split ...passed
00:11:46.739    Test: blockdev write zeroes read split ...passed
00:11:46.739    Test: blockdev write zeroes read split partial ...passed
00:11:46.739    Test: blockdev reset ...passed
00:11:46.739    Test: blockdev write read 8 blocks ...passed
00:11:46.739    Test: blockdev write read size > 128k ...passed
00:11:46.739    Test: blockdev write read invalid size ...passed
00:11:46.739    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.739    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.739    Test: blockdev write read max offset ...passed
00:11:46.739    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.739    Test: blockdev writev readv 8 blocks ...passed
00:11:46.739    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.739    Test: blockdev writev readv block ...passed
00:11:46.739    Test: blockdev writev readv size > 128k ...passed
00:11:46.739    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.739    Test: blockdev comparev and writev ...passed
00:11:46.739    Test: blockdev nvme passthru rw ...passed
00:11:46.739    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.739    Test: blockdev nvme admin passthru ...passed
00:11:46.739    Test: blockdev copy ...passed
00:11:46.739  Suite: bdevio tests on: Malloc1p0
00:11:46.739    Test: blockdev write read block ...passed
00:11:46.739    Test: blockdev write zeroes read block ...passed
00:11:46.739    Test: blockdev write zeroes read no split ...passed
00:11:46.739    Test: blockdev write zeroes read split ...passed
00:11:46.739    Test: blockdev write zeroes read split partial ...passed
00:11:46.739    Test: blockdev reset ...passed
00:11:46.739    Test: blockdev write read 8 blocks ...passed
00:11:46.739    Test: blockdev write read size > 128k ...passed
00:11:46.739    Test: blockdev write read invalid size ...passed
00:11:46.739    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.739    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.739    Test: blockdev write read max offset ...passed
00:11:46.739    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.739    Test: blockdev writev readv 8 blocks ...passed
00:11:46.739    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.739    Test: blockdev writev readv block ...passed
00:11:46.739    Test: blockdev writev readv size > 128k ...passed
00:11:46.739    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.739    Test: blockdev comparev and writev ...passed
00:11:46.739    Test: blockdev nvme passthru rw ...passed
00:11:46.739    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.739    Test: blockdev nvme admin passthru ...passed
00:11:46.739    Test: blockdev copy ...passed
00:11:46.739  Suite: bdevio tests on: Malloc0
00:11:46.739    Test: blockdev write read block ...passed
00:11:46.739    Test: blockdev write zeroes read block ...passed
00:11:46.739    Test: blockdev write zeroes read no split ...passed
00:11:46.739    Test: blockdev write zeroes read split ...passed
00:11:46.739    Test: blockdev write zeroes read split partial ...passed
00:11:46.739    Test: blockdev reset ...passed
00:11:46.739    Test: blockdev write read 8 blocks ...passed
00:11:46.739    Test: blockdev write read size > 128k ...passed
00:11:46.739    Test: blockdev write read invalid size ...passed
00:11:46.739    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:46.739    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:46.739    Test: blockdev write read max offset ...passed
00:11:46.739    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:46.739    Test: blockdev writev readv 8 blocks ...passed
00:11:46.739    Test: blockdev writev readv 30 x 1block ...passed
00:11:46.739    Test: blockdev writev readv block ...passed
00:11:46.739    Test: blockdev writev readv size > 128k ...passed
00:11:46.739    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:46.739    Test: blockdev comparev and writev ...passed
00:11:46.739    Test: blockdev nvme passthru rw ...passed
00:11:46.739    Test: blockdev nvme passthru vendor specific ...passed
00:11:46.739    Test: blockdev nvme admin passthru ...passed
00:11:46.739    Test: blockdev copy ...passed
00:11:46.739  
00:11:46.739  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:46.739                suites     16     16    n/a      0        0
00:11:46.739                 tests    368    368    368      0        0
00:11:46.739               asserts   2224   2224   2224      0      n/a
00:11:46.739  
00:11:46.739  Elapsed time =    3.535 seconds
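
The summary is internally consistent: each of the 16 suites above ran the same 23-test battery (count the "Test:" lines in any one suite), and 16 x 23 = 368, matching the tests row; the asserts row averages 139 per suite. Spot check:

    echo $((16 * 23))     # 368 tests total
    echo $((2224 / 16))   # 139 asserts per suite on average
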
00:11:46.998  0
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72325
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72325 ']'
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72325
00:11:46.998    13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:46.998    13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72325
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:46.998  killing process with pid 72325
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72325'
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72325
00:11:46.998   13:48:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72325
00:11:49.527   13:48:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:11:49.527  
00:11:49.527  real	0m5.029s
00:11:49.527  user	0m13.155s
00:11:49.527  sys	0m0.723s
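
In the timing block above, user time (13.155 s) exceeds wall time (5.029 s) because the bdevio app ran three busy-polling reactors (cores 0-2), each burning CPU for the whole run; the ratio is accordingly close to the core count. A quick check with the figures from the log:

    awk 'BEGIN { printf "%.2f\n", 13.155 / 5.029 }'   # ~2.62 CPU-seconds per wall second
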
00:11:49.527   13:48:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.527   13:48:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:11:49.527  ************************************
00:11:49.527  END TEST bdev_bounds
00:11:49.527  ************************************
00:11:49.527   13:48:31 blockdev_general -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:49.527   13:48:31 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:49.527   13:48:31 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:49.527   13:48:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:49.527  ************************************
00:11:49.527  START TEST bdev_nbd
00:11:49.527  ************************************
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:49.527    13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72413
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72413 /var/tmp/spdk-nbd.sock
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72413 ']'
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:49.527  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:49.527   13:48:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:11:49.527  [2024-12-11 13:48:31.861539] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:11:49.527  [2024-12-11 13:48:31.861746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:49.527  [2024-12-11 13:48:32.058249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:49.527  [2024-12-11 13:48:32.207411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:50.094  [2024-12-11 13:48:32.752569] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:50.094  [2024-12-11 13:48:32.752659] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:50.094  [2024-12-11 13:48:32.760522] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:50.094  [2024-12-11 13:48:32.760576] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:50.094  [2024-12-11 13:48:32.768529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:50.094  [2024-12-11 13:48:32.768579] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:50.094  [2024-12-11 13:48:32.768595] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:50.353  [2024-12-11 13:48:33.009884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:50.353  [2024-12-11 13:48:33.010003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:50.353  [2024-12-11 13:48:33.010050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80
00:11:50.353  [2024-12-11 13:48:33.010071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:50.353  [2024-12-11 13:48:33.013606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:50.353  [2024-12-11 13:48:33.013681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:50.918   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:50.918   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:50.919    13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:11:50.919   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:50.919    13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:51.177  1+0 records in
00:11:51.177  1+0 records out
00:11:51.177  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248998 s, 16.4 MB/s
00:11:51.177    13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:51.177   13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
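
The block from nbd_start_disk down to "return 0" above is the per-device attach-and-verify step that repeats for each of the 16 bdevs: the nbd_start_disk RPC (called here without an explicit device argument, so the kernel assigns one) returns e.g. /dev/nbd0, then waitfornbd polls /proc/partitions until the device appears and finally dd-reads one 4 KiB block with O_DIRECT to prove the device answers I/O (4096 B in 0.000249 s is the 16.4 MB/s dd reports). A condensed sketch of that flow as reconstructed from the xtrace; retry bounds, paths, and flags come from the trace, while the rpc_py wrapper and the sleep between retries are assumptions:

    rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    waitfornbd() {
        local nbd_name=$1 i size
        local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the trace only shows the retry bounds
        done
        # read one 4 KiB block back to confirm the device is usable
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$testfile")
                rm -f "$testfile"
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }

    nbd_device=$(rpc_py nbd_start_disk Malloc0)   # kernel picks e.g. /dev/nbd0
    waitfornbd "$(basename "$nbd_device")"

The same pattern repeats below for Malloc1p0 through the remaining bdevs, walking /dev/nbd1, /dev/nbd2, and so on.
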
00:11:51.177    13:48:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:51.435    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:51.435  1+0 records in
00:11:51.435  1+0 records out
00:11:51.435  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298444 s, 13.7 MB/s
00:11:51.435    13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:51.435   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:51.435    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:11:51.692    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:51.692  1+0 records in
00:11:51.692  1+0 records out
00:11:51.692  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318112 s, 12.9 MB/s
00:11:51.692    13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:51.692   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:51.692    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:11:51.950    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:51.950  1+0 records in
00:11:51.950  1+0 records out
00:11:51.950  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374856 s, 10.9 MB/s
00:11:51.950    13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:51.950   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:51.950    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:11:52.208    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:52.208  1+0 records in
00:11:52.208  1+0 records out
00:11:52.208  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307788 s, 13.3 MB/s
00:11:52.208    13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:52.208   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:52.466   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:52.466   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:52.466   13:48:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:52.466   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:52.466   13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:52.466    13:48:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:11:52.466    13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:52.466   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:52.466  1+0 records in
00:11:52.466  1+0 records out
00:11:52.466  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549176 s, 7.5 MB/s
00:11:52.466    13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:52.724   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:52.724   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:52.724   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:52.724   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:52.724   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:52.724   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:52.724    13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:11:52.983    13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:52.983  1+0 records in
00:11:52.983  1+0 records out
00:11:52.983  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459435 s, 8.9 MB/s
00:11:52.983    13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:52.983   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:52.983    13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:11:53.242    13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:53.242  1+0 records in
00:11:53.242  1+0 records out
00:11:53.242  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500079 s, 8.2 MB/s
00:11:53.242    13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:53.242   13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:53.242    13:48:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:11:53.501    13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:53.501  1+0 records in
00:11:53.501  1+0 records out
00:11:53.501  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433917 s, 9.4 MB/s
00:11:53.501    13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:53.501   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:53.501    13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:11:53.760    13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:53.760  1+0 records in
00:11:53.760  1+0 records out
00:11:53.760  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545667 s, 7.5 MB/s
00:11:53.760    13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:53.760   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:53.760    13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:11:54.018   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:11:54.018    13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:11:54.018   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:11:54.018   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:11:54.018   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:54.018   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:54.018   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:54.019  1+0 records in
00:11:54.019  1+0 records out
00:11:54.019  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00557872 s, 734 kB/s
00:11:54.019    13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:54.019   13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:54.019    13:48:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:11:54.586    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:54.586  1+0 records in
00:11:54.586  1+0 records out
00:11:54.586  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558601 s, 7.3 MB/s
00:11:54.586    13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:54.586   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:54.586    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:11:54.844    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:54.844  1+0 records in
00:11:54.844  1+0 records out
00:11:54.844  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672174 s, 6.1 MB/s
00:11:54.844    13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:54.844   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:54.845   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:54.845   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:54.845   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:54.845   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:54.845    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:11:55.102    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:55.102  1+0 records in
00:11:55.102  1+0 records out
00:11:55.102  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053639 s, 7.6 MB/s
00:11:55.102    13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:55.102   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:55.103    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:11:55.361    13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:55.361   13:48:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:55.361  1+0 records in
00:11:55.361  1+0 records out
00:11:55.361  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674499 s, 6.1 MB/s
00:11:55.361    13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:55.361   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:55.361    13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:11:55.619    13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:55.619  1+0 records in
00:11:55.619  1+0 records out
00:11:55.619  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010609 s, 3.9 MB/s
00:11:55.619    13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:11:55.619   13:48:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:11:55.620   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:55.620   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:55.620    13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:56.185   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd0",
00:11:56.185      "bdev_name": "Malloc0"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd1",
00:11:56.185      "bdev_name": "Malloc1p0"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd2",
00:11:56.185      "bdev_name": "Malloc1p1"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd3",
00:11:56.185      "bdev_name": "Malloc2p0"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd4",
00:11:56.185      "bdev_name": "Malloc2p1"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd5",
00:11:56.185      "bdev_name": "Malloc2p2"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd6",
00:11:56.185      "bdev_name": "Malloc2p3"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd7",
00:11:56.185      "bdev_name": "Malloc2p4"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd8",
00:11:56.185      "bdev_name": "Malloc2p5"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd9",
00:11:56.185      "bdev_name": "Malloc2p6"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd10",
00:11:56.185      "bdev_name": "Malloc2p7"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd11",
00:11:56.185      "bdev_name": "TestPT"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd12",
00:11:56.185      "bdev_name": "raid0"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd13",
00:11:56.185      "bdev_name": "concat0"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd14",
00:11:56.185      "bdev_name": "raid1"
00:11:56.185    },
00:11:56.185    {
00:11:56.185      "nbd_device": "/dev/nbd15",
00:11:56.185      "bdev_name": "AIO0"
00:11:56.185    }
00:11:56.185  ]'
00:11:56.185   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:11:56.185    13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
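With all sixteen bdevs attached, nbd_get_disks returns the JSON array above, one {nbd_device, bdev_name} object per attachment, and the harness extracts just the device paths with jq before tearing everything down. A sketch of that extraction, with the rpc.py and socket paths taken from the trace; mapfile is used here as the idiomatic equivalent of the word-splitting array assignment in nbd_common.sh@119:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # JSON array of {nbd_device, bdev_name} objects, one per attachment.
    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)

    # Device paths only, one per line: /dev/nbd0 .. /dev/nbd15.
    mapfile -t nbd_disks_name < <(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    echo "${nbd_disks_name[@]}"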
00:11:56.185   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:11:56.185   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15')
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:56.186    13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
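Teardown mirrors attach: each device is detached over RPC with nbd_stop_disk, and waitfornbd_exit polls /proc/partitions until the name disappears (in the trace the first grep already fails, so the loop breaks on its first pass). A sketch under the same assumption about retry pacing:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Poll until the kernel drops the device from /proc/partitions
    # (mirrors the waitfornbd_exit helper traced above; pacing assumed).
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: done
            sleep 0.1
        done
        return 0
    }

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    waitfornbd_exit "$(basename /dev/nbd0)"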
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:56.186   13:48:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:56.452    13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:56.452   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:56.717    13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:56.717   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:56.975    13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:56.975   13:48:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:57.540    13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:57.540   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:57.798    13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:57.798   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:58.056    13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:58.056   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:58.314    13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:58.314   13:48:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:58.572    13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:58.572   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:58.573   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:58.830    13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:58.830   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:59.089    13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:59.089   13:48:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:59.347    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:59.347   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:59.605    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:59.605   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:59.862    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:59.863   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:12:00.121    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:00.121   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:12:00.378    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:12:00.378   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:12:00.378   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:12:00.378   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:00.378   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:00.379   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:12:00.379   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:00.379   13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:00.379    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:00.379    13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:00.379     13:48:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:00.637    13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:00.637     13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:00.637     13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:00.637    13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:00.637     13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:00.637     13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:00.637     13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:00.637    13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:00.637    13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
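After the last device is stopped, the harness asserts that nothing is left attached: nbd_get_disks now returns [], the jq extraction yields an empty string, and grep -c counts zero matches of /dev/nbd. Because grep exits non-zero when it finds nothing, the helper tolerates that status, which is the bare true in the trace. A sketch, with variable names taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Count nbd devices still attached.  grep -c prints 0 and exits 1
    # when nothing matches, so || true keeps the pipeline alive under
    # set -e (mirrors nbd_common.sh@61-66).
    nbd_get_count() {
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

    count=$(nbd_get_count)
    if [ "$count" -ne 0 ]; then
        echo "nbd teardown left $count device(s) attached" >&2
        exit 1
    fi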
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:00.637   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:12:00.895  /dev/nbd0
00:12:00.895    13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:00.895  1+0 records in
00:12:00.895  1+0 records out
00:12:00.895  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025316 s, 16.2 MB/s
00:12:00.895    13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
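The run then enters nbd_rpc_data_verify, which re-attaches every bdev, this time passing an explicit device path to nbd_start_disk (Malloc0 to /dev/nbd0, and so on) instead of letting SPDK pick a slot; the RPC echoes the device it bound, and the same waitfornbd readiness check follows. A sketch of the start loop, with the bdev and device lists copied verbatim from the trace and waitfornbd as sketched earlier:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2
               Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT
               raid0 concat0 raid1 AIO0)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13
              /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
              /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

    for ((i = 0; i < 16; i++)); do
        # Bind this bdev to a caller-chosen device; the RPC prints the path.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"   # readiness check sketched above
    done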
00:12:00.895   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1
00:12:01.153  /dev/nbd1
00:12:01.153    13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:01.153  1+0 records in
00:12:01.153  1+0 records out
00:12:01.153  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322719 s, 12.7 MB/s
00:12:01.153    13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:01.153   13:48:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
00:12:01.411  /dev/nbd10
00:12:01.411    13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:01.411  1+0 records in
00:12:01.411  1+0 records out
00:12:01.411  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372405 s, 11.0 MB/s
00:12:01.411    13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:01.411   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11
00:12:01.669  /dev/nbd11
00:12:01.669    13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:12:01.669   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:01.670  1+0 records in
00:12:01.670  1+0 records out
00:12:01.670  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390569 s, 10.5 MB/s
00:12:01.670    13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:01.670   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12
00:12:01.927  /dev/nbd12
00:12:01.927    13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:01.927  1+0 records in
00:12:01.927  1+0 records out
00:12:01.927  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445914 s, 9.2 MB/s
00:12:01.927    13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:01.927   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13
00:12:02.185  /dev/nbd13
00:12:02.185    13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:12:02.185   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:12:02.185   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:12:02.185   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:02.185   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:02.185   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:02.185   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:02.443  1+0 records in
00:12:02.443  1+0 records out
00:12:02.443  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461927 s, 8.9 MB/s
00:12:02.443    13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:02.443   13:48:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14
00:12:02.443  /dev/nbd14
00:12:02.443    13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:02.443   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:02.443  1+0 records in
00:12:02.443  1+0 records out
00:12:02.443  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566958 s, 7.2 MB/s
00:12:02.444    13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:02.701   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15
00:12:02.958  /dev/nbd15
00:12:02.958    13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:02.958  1+0 records in
00:12:02.958  1+0 records out
00:12:02.958  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005573 s, 7.3 MB/s
00:12:02.958    13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:02.958   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2
00:12:03.216  /dev/nbd2
00:12:03.216    13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:03.216  1+0 records in
00:12:03.216  1+0 records out
00:12:03.216  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546462 s, 7.5 MB/s
00:12:03.216    13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:03.216   13:48:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3
00:12:03.473  /dev/nbd3
00:12:03.473    13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:03.473  1+0 records in
00:12:03.473  1+0 records out
00:12:03.473  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006298 s, 6.5 MB/s
00:12:03.473    13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:03.473   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4
00:12:03.731  /dev/nbd4
00:12:03.731    13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:03.731  1+0 records in
00:12:03.731  1+0 records out
00:12:03.731  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500362 s, 8.2 MB/s
00:12:03.731    13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:03.731   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5
00:12:03.989  /dev/nbd5
00:12:03.989    13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:04.247  1+0 records in
00:12:04.247  1+0 records out
00:12:04.247  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613489 s, 6.7 MB/s
00:12:04.247    13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:04.247   13:48:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6
00:12:04.505  /dev/nbd6
00:12:04.505    13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:04.505  1+0 records in
00:12:04.505  1+0 records out
00:12:04.505  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703523 s, 5.8 MB/s
00:12:04.505    13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:04.505   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7
00:12:04.763  /dev/nbd7
00:12:04.763    13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:04.763  1+0 records in
00:12:04.763  1+0 records out
00:12:04.763  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504197 s, 8.1 MB/s
00:12:04.763    13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:04.763   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8
00:12:05.021  /dev/nbd8
00:12:05.021    13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:05.021  1+0 records in
00:12:05.021  1+0 records out
00:12:05.021  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750719 s, 5.5 MB/s
00:12:05.021    13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:12:05.021   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9
00:12:05.279  /dev/nbd9
00:12:05.279    13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:05.279  1+0 records in
00:12:05.279  1+0 records out
00:12:05.279  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136616 s, 3.0 MB/s
00:12:05.279    13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:05.279   13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
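
That was the last of the sixteen attach operations. The driving loop (nbd_common.sh lines 14-17) walks the bdev and nbd lists in lockstep: start the NBD export over the RPC socket, then block in waitfornbd until the device is usable. A sketch under the assumption that both lists arrive as whitespace-separated strings (the hardcoded 16 in the trace is just the expanded list length):

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)    # Malloc0 Malloc1p0 ... raid1 AIO0 in this run
        local nbd_list=($3)     # /dev/nbd0 /dev/nbd1 ... /dev/nbd9 in this run
        local i

        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            $rootdir/scripts/rpc.py -s "$rpc_server" nbd_start_disk \
                "${bdev_list[$i]}" "${nbd_list[$i]}"
            waitfornbd "$(basename "${nbd_list[$i]}")"
        done
    }

Here $rootdir stands in for /home/vagrant/spdk_repo/spdk as seen in the trace.
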
00:12:05.279    13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:05.279    13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.279     13:48:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:05.540    13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:05.540    {
00:12:05.540      "nbd_device": "/dev/nbd0",
00:12:05.540      "bdev_name": "Malloc0"
00:12:05.540    },
00:12:05.540    {
00:12:05.540      "nbd_device": "/dev/nbd1",
00:12:05.540      "bdev_name": "Malloc1p0"
00:12:05.540    },
00:12:05.540    {
00:12:05.540      "nbd_device": "/dev/nbd10",
00:12:05.540      "bdev_name": "Malloc1p1"
00:12:05.540    },
00:12:05.540    {
00:12:05.540      "nbd_device": "/dev/nbd11",
00:12:05.540      "bdev_name": "Malloc2p0"
00:12:05.540    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd12",
00:12:05.541      "bdev_name": "Malloc2p1"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd13",
00:12:05.541      "bdev_name": "Malloc2p2"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd14",
00:12:05.541      "bdev_name": "Malloc2p3"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd15",
00:12:05.541      "bdev_name": "Malloc2p4"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd2",
00:12:05.541      "bdev_name": "Malloc2p5"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd3",
00:12:05.541      "bdev_name": "Malloc2p6"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd4",
00:12:05.541      "bdev_name": "Malloc2p7"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd5",
00:12:05.541      "bdev_name": "TestPT"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd6",
00:12:05.541      "bdev_name": "raid0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd7",
00:12:05.541      "bdev_name": "concat0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd8",
00:12:05.541      "bdev_name": "raid1"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd9",
00:12:05.541      "bdev_name": "AIO0"
00:12:05.541    }
00:12:05.541  ]'
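
nbd_get_disks returns one {nbd_device, bdev_name} pair per active export, which is how the test recovers the mapping it just created. Outside the harness the same socket can be queried directly; the jq pretty-print here is purely illustrative:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'
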
00:12:05.541     13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd0",
00:12:05.541      "bdev_name": "Malloc0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd1",
00:12:05.541      "bdev_name": "Malloc1p0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd10",
00:12:05.541      "bdev_name": "Malloc1p1"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd11",
00:12:05.541      "bdev_name": "Malloc2p0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd12",
00:12:05.541      "bdev_name": "Malloc2p1"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd13",
00:12:05.541      "bdev_name": "Malloc2p2"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd14",
00:12:05.541      "bdev_name": "Malloc2p3"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd15",
00:12:05.541      "bdev_name": "Malloc2p4"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd2",
00:12:05.541      "bdev_name": "Malloc2p5"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd3",
00:12:05.541      "bdev_name": "Malloc2p6"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd4",
00:12:05.541      "bdev_name": "Malloc2p7"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd5",
00:12:05.541      "bdev_name": "TestPT"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd6",
00:12:05.541      "bdev_name": "raid0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd7",
00:12:05.541      "bdev_name": "concat0"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd8",
00:12:05.541      "bdev_name": "raid1"
00:12:05.541    },
00:12:05.541    {
00:12:05.541      "nbd_device": "/dev/nbd9",
00:12:05.541      "bdev_name": "AIO0"
00:12:05.541    }
00:12:05.541  ]'
00:12:05.541     13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:05.541    13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:12:05.541  /dev/nbd1
00:12:05.541  /dev/nbd10
00:12:05.541  /dev/nbd11
00:12:05.541  /dev/nbd12
00:12:05.541  /dev/nbd13
00:12:05.541  /dev/nbd14
00:12:05.541  /dev/nbd15
00:12:05.541  /dev/nbd2
00:12:05.541  /dev/nbd3
00:12:05.541  /dev/nbd4
00:12:05.541  /dev/nbd5
00:12:05.541  /dev/nbd6
00:12:05.541  /dev/nbd7
00:12:05.541  /dev/nbd8
00:12:05.541  /dev/nbd9'
00:12:05.541     13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:05.541  /dev/nbd1
00:12:05.541  /dev/nbd10
00:12:05.541  /dev/nbd11
00:12:05.541  /dev/nbd12
00:12:05.541  /dev/nbd13
00:12:05.541  /dev/nbd14
00:12:05.541  /dev/nbd15
00:12:05.541  /dev/nbd2
00:12:05.541  /dev/nbd3
00:12:05.541  /dev/nbd4
00:12:05.541  /dev/nbd5
00:12:05.541  /dev/nbd6
00:12:05.541  /dev/nbd7
00:12:05.541  /dev/nbd8
00:12:05.541  /dev/nbd9'
00:12:05.541     13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:05.541    13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16
00:12:05.541    13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
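
The count check above (nbd_common.sh 61-66 and 95-96) boils down to: dump the mapping, count the /dev/nbd names, and compare against the sixteen devices that were started. Reconstructed from the traced lines, with variable names assumed:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count

        nbd_disks_json=$($rootdir/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
        echo "$count"
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    if [ "$count" -ne 16 ]; then
        return 1    # assumed failure action; the check passes in this run
    fi
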
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:12:05.541  256+0 records in
00:12:05.541  256+0 records out
00:12:05.541  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655093 s, 160 MB/s
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:05.541   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:05.799  256+0 records in
00:12:05.799  256+0 records out
00:12:05.799  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181647 s, 5.8 MB/s
00:12:05.799   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:05.799   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:06.057  256+0 records in
00:12:06.057  256+0 records out
00:12:06.057  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185536 s, 5.7 MB/s
00:12:06.057   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.057   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:12:06.057  256+0 records in
00:12:06.057  256+0 records out
00:12:06.057  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190804 s, 5.5 MB/s
00:12:06.057   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.057   13:48:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:12:06.314  256+0 records in
00:12:06.314  256+0 records out
00:12:06.314  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188344 s, 5.6 MB/s
00:12:06.314   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.314   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:12:06.572  256+0 records in
00:12:06.572  256+0 records out
00:12:06.572  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185833 s, 5.6 MB/s
00:12:06.572   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.572   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:12:06.829  256+0 records in
00:12:06.829  256+0 records out
00:12:06.829  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187811 s, 5.6 MB/s
00:12:06.829   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.829   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:12:06.829  256+0 records in
00:12:06.829  256+0 records out
00:12:06.829  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189496 s, 5.5 MB/s
00:12:06.829   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:06.829   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:12:07.086  256+0 records in
00:12:07.086  256+0 records out
00:12:07.086  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185945 s, 5.6 MB/s
00:12:07.086   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:07.086   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:12:07.344  256+0 records in
00:12:07.344  256+0 records out
00:12:07.344  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186074 s, 5.6 MB/s
00:12:07.344   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:07.344   13:48:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:12:07.602  256+0 records in
00:12:07.602  256+0 records out
00:12:07.602  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186932 s, 5.6 MB/s
00:12:07.602   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:07.602   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:12:07.602  256+0 records in
00:12:07.602  256+0 records out
00:12:07.602  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184215 s, 5.7 MB/s
00:12:07.602   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:07.602   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:12:07.860  256+0 records in
00:12:07.860  256+0 records out
00:12:07.860  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187624 s, 5.6 MB/s
00:12:07.860   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:07.860   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:12:08.118  256+0 records in
00:12:08.118  256+0 records out
00:12:08.118  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186858 s, 5.6 MB/s
00:12:08.118   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:08.118   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:12:08.376  256+0 records in
00:12:08.376  256+0 records out
00:12:08.376  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188803 s, 5.6 MB/s
00:12:08.376   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:08.376   13:48:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:12:08.376  256+0 records in
00:12:08.376  256+0 records out
00:12:08.376  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18906 s, 5.5 MB/s
00:12:08.376   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:08.376   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:12:08.943  256+0 records in
00:12:08.943  256+0 records out
00:12:08.943  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.269735 s, 3.9 MB/s
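
That completes the write half of nbd_dd_data_verify: one 1 MiB buffer of /dev/urandom data is staged in nbdrandtest and then pushed through every device with oflag=direct, so the writes reach the bdevs rather than the page cache. Sketched from the traced lines (the real helper takes a write/verify operation argument; only the write branch is shown):

    nbd_dd_data_verify_write() {
        local nbd_list=($1)
        local tmp_file=$rootdir/test/bdev/nbdrandtest
        local i

        # 256 x 4 KiB = 1 MiB of random payload, shared by all devices.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    }

The drop from 160 MB/s (file creation) to ~5-6 MB/s per device is expected here, largely because oflag=direct forces each 4 KiB write to complete against the NBD-backed bdev before the next one is issued.
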
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
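
The verify half reads each device back and byte-compares it against the same random file: cmp -b reports the first differing byte pair, and -n 1M limits the comparison to the megabyte that was written, so sixteen silent cmp runs mean every round trip came back clean. The equivalent loop, assuming the same variable names as above:

    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"
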
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:08.943   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:09.201    13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:09.201   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:09.201   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
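
Tear-down mirrors start-up: nbd_stop_disk over the RPC socket, then waitfornbd_exit (nbd_common.sh 35-45) polls /proc/partitions until the name disappears. The retry branch is never exercised in this run, so the sleep is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1
        local i

        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # assumed back-off while the device lingers
            else
                break        # device is gone; traced as the taken branch
            fi
        done

        return 0
    }
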
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:09.202   13:48:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:09.459    13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:12:09.716    13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:12:09.716   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:09.717   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:12:09.975    13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:09.975   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:12:10.233    13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:12:10.233   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:12:10.233   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:12:10.233   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:10.233   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:10.233   13:48:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:12:10.233   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:10.233   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:10.233   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:10.233   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:12:10.491    13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:12:10.749    13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:10.749   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:12:11.315    13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:11.315   13:48:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:12:11.315    13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:11.315   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:12:11.572    13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:11.572   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:12:11.829    13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:11.829   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:12:12.086    13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:12.086   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:12:12.343    13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:12.343   13:48:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:12:12.599    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:12.599   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:12.600   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:12:12.857    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:12.857   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:12:13.114    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:13.114   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
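The teardown above is one pattern repeated per device: nbd_stop_disks walks nbd_list, asks the RPC server to stop each export, then waitfornbd_exit polls /proc/partitions until the kernel entry disappears. A minimal sketch of the two helpers as reconstructed from the trace (nbd_common.sh@49-55 and @35-45), with $rootdir standing in for /home/vagrant/spdk_repo/spdk; the 0.1s sleep between polls is an assumption, since only the 20-iteration bound and the grep are visible here:

    nbd_stop_disks() {
        local rpc_server=$1
        shift
        local nbd_list=("$@") i   # argument handling simplified for the sketch
        for i in "${nbd_list[@]}"; do
            "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            # -w matches whole words, so nbd1 does not also match nbd13
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # still listed: give the kernel time to tear it down
            else
                break       # gone from /proc/partitions
            fi
        done
        return 0            # best effort: a slow teardown never fails the caller
    }

In this run every device is already gone on the first poll, which is why each iteration goes straight from the grep at @38 to the break at @41.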
00:12:13.114    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:13.114    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:13.115     13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:13.371    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:13.371     13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:13.371     13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:13.371    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:13.372     13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:13.372     13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:13.372     13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:13.372    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:13.372    13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
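nbd_get_count (nbd_common.sh@61-66) then verifies the teardown: it asks the RPC server which exports remain and counts /dev/nbd entries in the answer. Here nbd_get_disks returns [] and the count is 0, so the guard at @105 passes and @109 returns success. A sketch, again with $rootdir as the repo prefix; the bare `true` at @65 is grep -c exiting 1 on zero matches, absorbed so errexit does not abort the function:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits nonzero when the count is 0; || true keeps errexit happy
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }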
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:12:13.372   13:48:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:13.629  malloc_lvol_verify
00:12:13.629   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:13.629  ea2804f2-2680-4a14-ac74-851a705e766c
00:12:13.629   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:13.886  8ec9719f-97c1-4ee5-9313-ed8088133ce6
00:12:13.886   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:14.143  /dev/nbd0
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
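Before mkfs touches the device, the test waits for the kernel to publish a nonzero capacity for the new export; here /sys/block/nbd0/size already reads 8192 (512-byte sectors, i.e. the 4 MiB lvol), so the check passes immediately. A sketch of wait_for_nbd_set_capacity; the retry bound and sleep interval are assumptions, since the trace only shows the existence test at @148 and the nonzero comparison at @150:

    wait_for_nbd_set_capacity() {
        local nbd
        nbd=$(basename "$1")                 # /dev/nbd0 -> nbd0
        local size_file=/sys/block/$nbd/size
        for ((i = 0; i < 50; i++)); do       # assumed bound
            [[ -e $size_file ]] || return 1  # no sysfs node: the export failed
            (($(< "$size_file") != 0)) && return 0
            sleep 0.1                        # assumed interval
        done
        return 1
    }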
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:12:14.143  mke2fs 1.47.0 (5-Feb-2023)
00:12:14.143  
00:12:14.143  Filesystem too small for a journal
00:12:14.143  Discarding device blocks: done
00:12:14.143  Creating filesystem with 1024 4k blocks and 1024 inodes
00:12:14.143  
00:12:14.143  Allocating group tables: done
00:12:14.143  Writing inode tables: done
00:12:14.143  Writing superblocks and filesystem accounting information: done
00:12:14.143  
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:14.143   13:48:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:14.401    13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
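Taken together, @131-142 is a six-step round trip over the RPC socket, and any step failing (for instance mkfs.ext4 rejecting the device) fails the test. The equivalent commands, with the rpc.py path and socket copied from the log:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, bdev name lvs/lvol
    rpc nbd_start_disk lvs/lvol /dev/nbd0                 # exports it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                   # drives real block I/O through nbd
    rpc nbd_stop_disk /dev/nbd0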
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72413
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72413 ']'
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72413
00:12:14.401    13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:14.401    13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72413
00:12:14.401  killing process with pid 72413
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72413'
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72413
00:12:14.401   13:48:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72413
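killprocess (autotest_common.sh@954-978) is the standard app teardown: probe the pid, check the process name so a sudo wrapper is never signalled directly, send the kill, then wait so the exit status is reaped. Here the name resolves to reactor_0, the SPDK app thread. A sketch; the real helper's sudo branch is not exercised in this trace and is simplified away:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0      # not running: nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper special-cases sudo wrappers; simplified here
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                     # reap and propagate the exit status
    }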
00:12:17.681   13:48:59 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:12:17.681  ************************************
00:12:17.681  END TEST bdev_nbd
00:12:17.681  ************************************
00:12:17.681  
00:12:17.681  real	0m28.070s
00:12:17.681  user	0m35.807s
00:12:17.681  sys	0m12.222s
00:12:17.681   13:48:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:17.681   13:48:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:12:17.681   13:48:59 blockdev_general -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:12:17.681   13:48:59 blockdev_general -- bdev/blockdev.sh@801 -- # '[' bdev = nvme ']'
00:12:17.681   13:48:59 blockdev_general -- bdev/blockdev.sh@801 -- # '[' bdev = gpt ']'
00:12:17.681   13:48:59 blockdev_general -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:12:17.681   13:48:59 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:17.681   13:48:59 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:17.681   13:48:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:17.681  ************************************
00:12:17.681  START TEST bdev_fio
00:12:17.681  ************************************
00:12:17.681   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:12:17.681   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:12:17.681   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:12:17.681  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:12:17.681   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:12:17.681    13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:12:17.681    13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:12:17.681   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
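fio_test_suite's single positional argument is an optional --env-context=... string (empty in this run); @338 simply strips the flag prefix before storing the value:

    env_context=$(echo "$1" | sed s/--env-context=//)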
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:12:17.682    13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
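The @1327-1329 branch applies only to AIO-backed configs: when `fio --version` reports a 3.x build (fio-3.35 here), the helper emits serialize_overlap=1, which makes fio serialize overlapping in-flight I/Os so verify results stay deterministic. A sketch; the redirection target is assumed to be the config file being generated:

    if [[ $bdev_type == AIO ]]; then
        fio_version=$(/usr/src/fio/fio --version)   # e.g. fio-3.35
        if [[ $fio_version == *"fio-3"* ]]; then
            echo "serialize_overlap=1" >> "$config_file"
        fi
    fi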
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]'
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:48:59 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]'
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]'
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]'
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]'
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0
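The long run of @340-342 echoes above is a single loop: one [job_<name>] section per bdev, each pinned to its bdev with filename=. A sketch, assuming bdevs_name holds the sixteen names printed above and $fio_config (variable name assumed) is the bdev.fio being built:

    for b in "${bdevs_name[@]}"; do
        {
            echo "[job_$b]"
            echo "filename=$b"
        } >> "$fio_config"
    done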
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:17.682   13:49:00 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:12:17.682  ************************************
00:12:17.682  START TEST bdev_fio_rw_verify
00:12:17.682  ************************************
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:17.682    13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:17.682    13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:12:17.682    13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:17.682   13:49:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
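The @1343-1356 dance above exists because the fio binary itself is not ASan-instrumented: the helper inspects which sanitizer runtime the SPDK fio plugin links against and preloads it ahead of the plugin, since the ASan runtime must be loaded before any instrumented code. A sketch using the paths from the log:

    sanitizers=(libasan libclang_rt.asan)
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /path/libasan.so.8 (0x...)"; field 3 is the path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break     # /lib/x86_64-linux-gnu/libasan.so.8 in this run
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"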
00:12:17.682  job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:17.682  fio-3.35
00:12:17.682  Starting 16 threads
00:12:29.885  
00:12:29.885  job_Malloc0: (groupid=0, jobs=16): err= 0: pid=73571: Wed Dec 11 13:49:11 2024
00:12:29.885    read: IOPS=80.0k, BW=312MiB/s (328MB/s)(3124MiB/10001msec)
00:12:29.885      slat (usec): min=2, max=12057, avg=37.64, stdev=245.00
00:12:29.885      clat (usec): min=9, max=15101, avg=298.38, stdev=708.02
00:12:29.885       lat (usec): min=27, max=15115, avg=336.02, stdev=747.55
00:12:29.885      clat percentiles (usec):
00:12:29.885       | 50.000th=[  182], 99.000th=[ 4228], 99.900th=[ 7308], 99.990th=[10421],
00:12:29.885       | 99.999th=[12256]
00:12:29.885    write: IOPS=126k, BW=493MiB/s (517MB/s)(4878MiB/9889msec); 0 zone resets
00:12:29.885      slat (usec): min=5, max=18010, avg=60.31, stdev=318.18
00:12:29.885      clat (usec): min=6, max=18445, avg=370.09, stdev=794.97
00:12:29.885       lat (usec): min=24, max=18494, avg=430.40, stdev=853.29
00:12:29.885      clat percentiles (usec):
00:12:29.885       | 50.000th=[  223], 99.000th=[ 4359], 99.900th=[ 7439], 99.990th=[11469],
00:12:29.885       | 99.999th=[15008]
00:12:29.885     bw (  KiB/s): min=301232, max=792096, per=98.69%, avg=498533.00, stdev=8170.28, samples=304
00:12:29.885     iops        : min=75308, max=198023, avg=124632.74, stdev=2042.56, samples=304
00:12:29.885    lat (usec)   : 10=0.01%, 20=0.01%, 50=0.48%, 100=10.70%, 250=55.70%
00:12:29.885    lat (usec)   : 500=28.55%, 750=1.16%, 1000=0.09%
00:12:29.885    lat (msec)   : 2=0.11%, 4=1.09%, 10=2.09%, 20=0.02%
00:12:29.885    cpu          : usr=57.79%, sys=2.65%, ctx=248610, majf=0, minf=103689
00:12:29.885    IO depths    : 1=11.1%, 2=23.5%, 4=52.2%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:29.885       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:29.885       complete  : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:29.885       issued rwts: total=799821,1248826,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:29.885       latency   : target=0, window=0, percentile=100.00%, depth=8
00:12:29.885  
00:12:29.885  Run status group 0 (all jobs):
00:12:29.885     READ: bw=312MiB/s (328MB/s), 312MiB/s-312MiB/s (328MB/s-328MB/s), io=3124MiB (3276MB), run=10001-10001msec
00:12:29.885    WRITE: bw=493MiB/s (517MB/s), 493MiB/s-493MiB/s (517MB/s-517MB/s), io=4878MiB (5115MB), run=9889-9889msec
00:12:32.475  -----------------------------------------------------
00:12:32.475  Suppressions used:
00:12:32.475    count      bytes template
00:12:32.475       16        140 /usr/src/fio/parse.c
00:12:32.475    12797    1228512 /usr/src/fio/iolog.c
00:12:32.475        1        904 libcrypto.so
00:12:32.475  -----------------------------------------------------
00:12:32.475  
00:12:32.475  
00:12:32.475  real	0m14.717s
00:12:32.475  user	1m37.709s
00:12:32.475  sys	0m5.519s
00:12:32.475   13:49:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:32.475   13:49:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:12:32.475  ************************************
00:12:32.475  END TEST bdev_fio_rw_verify
00:12:32.475  ************************************
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:12:32.475   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
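fio_config_gen is reused here with workload=trim: instead of the verify options, @1332-1333 turns the job into a trimwrite workload, which is what will exercise the unmap path on the bdevs selected next. A sketch of the branching; the bare `cat` lines at @1305 and @1318 hide here-documents in the real helper, so the option contents below are assumptions:

    fio_config_gen() {
        local config_file=$1 workload=$2 bdev_type=$3
        touch "$config_file"
        # shared job defaults; exact contents assumed
        printf '%s\n' '[global]' 'thread=1' >> "$config_file"
        if [[ $workload == verify ]]; then
            # verify-specific options; exact contents assumed
            printf '%s\n' 'verify=sha1' 'verify_backlog=1024' >> "$config_file"
        elif [[ $workload == trim ]]; then
            echo "rw=trimwrite" >> "$config_file"
        fi
    }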
00:12:32.475    13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:12:32.476    13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "86b4cd58-aa87-4d4b-8938-a81a79f83055"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "86b4cd58-aa87-4d4b-8938-a81a79f83055",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "0d5f80bd-b3c6-583b-9a5b-f1affbfef44c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "0d5f80bd-b3c6-583b-9a5b-f1affbfef44c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "73ef7338-0224-5ff2-acb1-730c965b8c2f"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "73ef7338-0224-5ff2-acb1-730c965b8c2f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "0105f1dc-e965-551b-931a-2216de4a6396"' '  ],' '  
"product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "0105f1dc-e965-551b-931a-2216de4a6396",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "ad7801da-7068-5127-8028-b634e4bb770b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ad7801da-7068-5127-8028-b634e4bb770b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "9adc5819-a822-5c33-a1e4-d2e0b9ddf0fe"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9adc5819-a822-5c33-a1e4-d2e0b9ddf0fe",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "f2644526-09b0-5238-9824-374520e38da8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "f2644526-09b0-5238-9824-374520e38da8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": 
false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "0a1185a6-8cb2-5ddb-b587-c951271df723"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "0a1185a6-8cb2-5ddb-b587-c951271df723",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "3f35c61d-9335-57e5-83e9-05bdb1536ca4"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "3f35c61d-9335-57e5-83e9-05bdb1536ca4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "9c3155bf-6960-5f9a-b6c6-61a883c16bbf"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9c3155bf-6960-5f9a-b6c6-61a883c16bbf",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    
"zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "2d369299-95c0-526f-964f-9d8bbd86db18"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "2d369299-95c0-526f-964f-9d8bbd86db18",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "af47e69e-bf97-5551-baf0-017b04bffbc5"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "af47e69e-bf97-5551-baf0-017b04bffbc5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "5b20c442-f780-4d5b-9b3d-7bf70849a691"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "5b20c442-f780-4d5b-9b3d-7bf70849a691",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": 
false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "5b20c442-f780-4d5b-9b3d-7bf70849a691",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "4afe5069-2858-4902-bc3a-12757644d491",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "14d35e88-2f34-46e6-aecc-e750810f5474",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "1096aeac-e469-4930-99ef-6919a8959cfa"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "1096aeac-e469-4930-99ef-6919a8959cfa",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "1096aeac-e469-4930-99ef-6919a8959cfa",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "900b2ba4-e2c9-4c0a-b82b-2bd92d6f549c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "8255ac82-04fa-4e7f-8db4-e58da37698e3",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    
"4b55a152-ead7-40c7-a7f9-33e96fdbcfa9"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "4b55a152-ead7-40c7-a7f9-33e96fdbcfa9",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "4b55a152-ead7-40c7-a7f9-33e96fdbcfa9",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "d35e5097-6105-45e3-860a-e59432cbd0db",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "cffecf5b-fda4-4bb9-9f1b-e622e55e833a",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "10cec670-8c00-470c-993a-11de71d37c1c"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "10cec670-8c00-470c-993a-11de71d37c1c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:12:32.476   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0
00:12:32.476  Malloc1p0
00:12:32.476  Malloc1p1
00:12:32.476  Malloc2p0
00:12:32.476  Malloc2p1
00:12:32.476  Malloc2p2
00:12:32.476  Malloc2p3
00:12:32.476  Malloc2p4
00:12:32.476  Malloc2p5
00:12:32.476  Malloc2p6
00:12:32.476  Malloc2p7
00:12:32.476  TestPT
00:12:32.476  raid0
00:12:32.476  concat0 ]]
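The @354 filter explains the list just tested: the printf'd dump is a stream of top-level JSON objects, jq applies the select to each object in turn, and only bdevs whose supported_io_types.unmap is true keep their name. That is exactly why raid1 and AIO0 (both "unmap": false above) are absent from the list. A sketch, assuming bdevs_json holds the object stream printed above:

    printf '%s\n' "$bdevs_json" |
        jq -r 'select(.supported_io_types.unmap == true) | .name'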
00:12:32.476    13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:12:32.478    13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "86b4cd58-aa87-4d4b-8938-a81a79f83055"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "86b4cd58-aa87-4d4b-8938-a81a79f83055",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "0d5f80bd-b3c6-583b-9a5b-f1affbfef44c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "0d5f80bd-b3c6-583b-9a5b-f1affbfef44c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "73ef7338-0224-5ff2-acb1-730c965b8c2f"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "73ef7338-0224-5ff2-acb1-730c965b8c2f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "0105f1dc-e965-551b-931a-2216de4a6396"' '  ],' '  
"product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "0105f1dc-e965-551b-931a-2216de4a6396",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "ad7801da-7068-5127-8028-b634e4bb770b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ad7801da-7068-5127-8028-b634e4bb770b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "9adc5819-a822-5c33-a1e4-d2e0b9ddf0fe"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9adc5819-a822-5c33-a1e4-d2e0b9ddf0fe",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "f2644526-09b0-5238-9824-374520e38da8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "f2644526-09b0-5238-9824-374520e38da8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": 
false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "0a1185a6-8cb2-5ddb-b587-c951271df723"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "0a1185a6-8cb2-5ddb-b587-c951271df723",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "3f35c61d-9335-57e5-83e9-05bdb1536ca4"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "3f35c61d-9335-57e5-83e9-05bdb1536ca4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "9c3155bf-6960-5f9a-b6c6-61a883c16bbf"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9c3155bf-6960-5f9a-b6c6-61a883c16bbf",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    
"zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "2d369299-95c0-526f-964f-9d8bbd86db18"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "2d369299-95c0-526f-964f-9d8bbd86db18",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "af47e69e-bf97-5551-baf0-017b04bffbc5"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "af47e69e-bf97-5551-baf0-017b04bffbc5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "5b20c442-f780-4d5b-9b3d-7bf70849a691"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "5b20c442-f780-4d5b-9b3d-7bf70849a691",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": 
false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "5b20c442-f780-4d5b-9b3d-7bf70849a691",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "4afe5069-2858-4902-bc3a-12757644d491",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "14d35e88-2f34-46e6-aecc-e750810f5474",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "1096aeac-e469-4930-99ef-6919a8959cfa"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "1096aeac-e469-4930-99ef-6919a8959cfa",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "1096aeac-e469-4930-99ef-6919a8959cfa",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "900b2ba4-e2c9-4c0a-b82b-2bd92d6f549c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "8255ac82-04fa-4e7f-8db4-e58da37698e3",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    
"4b55a152-ead7-40c7-a7f9-33e96fdbcfa9"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "4b55a152-ead7-40c7-a7f9-33e96fdbcfa9",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "4b55a152-ead7-40c7-a7f9-33e96fdbcfa9",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "d35e5097-6105-45e3-860a-e59432cbd0db",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "cffecf5b-fda4-4bb9-9f1b-e622e55e833a",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "10cec670-8c00-470c-993a-11de71d37c1c"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "10cec670-8c00-470c-993a-11de71d37c1c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0
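Each loop iteration above emits a two-line stanza, and the script appends them to bdev.fio before the trim run. A sketch of that assembly, under the assumption that a [global] section was written earlier and the loop's stdout is redirected into the job file (the redirect itself is outside this excerpt):

    # Hypothetical reconstruction of the stanza generation traced above.
    fio_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in $(printf '%s\n' "${bdevs[@]}" \
                   | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"        # one fio job per trim-capable bdev
        echo "filename=$b"     # the spdk_bdev ioengine resolves this by bdev name
    done >> "$fio_file"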
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:32.478   13:49:14 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:12:32.478  ************************************
00:12:32.478  START TEST bdev_fio_trim
00:12:32.478  ************************************
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # shift
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:32.478    13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:32.478    13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # grep libasan
00:12:32.478    13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # break
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:32.478   13:49:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
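The sh@1341-1356 trace above is the sanitizer shim: the fio plugin is ASan-instrumented, so the matching runtime has to be preloaded ahead of the plugin itself. A standalone sketch of that lookup, using only the commands visible in the trace:

    # Resolve the ASan runtime the plugin links against, then preload
    # it before the plugin (load order matters for ASan initialization).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
    # resolves here to /lib/x86_64-linux-gnu/libasan.so.8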
00:12:32.478  job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.478  job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.478  job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.478  job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:32.479  fio-3.35
00:12:32.479  Starting 14 threads
00:12:44.728  
00:12:44.728  job_Malloc0: (groupid=0, jobs=14): err= 0: pid=73776: Wed Dec 11 13:49:26 2024
00:12:44.728    write: IOPS=128k, BW=500MiB/s (525MB/s)(5004MiB/10003msec); 0 zone resets
00:12:44.728      slat (usec): min=2, max=10489, avg=38.67, stdev=215.08
00:12:44.728      clat (usec): min=24, max=13195, avg=278.10, stdev=583.40
00:12:44.728       lat (usec): min=36, max=13217, avg=316.77, stdev=621.33
00:12:44.728      clat percentiles (usec):
00:12:44.728       | 50.000th=[  186], 99.000th=[ 4178], 99.900th=[ 6325], 99.990th=[ 8225],
00:12:44.728       | 99.999th=[12125]
00:12:44.728     bw (  KiB/s): min=325088, max=761832, per=100.00%, avg=515129.63, stdev=10330.53, samples=266
00:12:44.728     iops        : min=81272, max=190458, avg=128782.16, stdev=2582.63, samples=266
00:12:44.728    trim: IOPS=128k, BW=500MiB/s (525MB/s)(5004MiB/10003msec); 0 zone resets
00:12:44.728      slat (usec): min=3, max=12042, avg=26.03, stdev=175.68
00:12:44.728      clat (usec): min=5, max=13217, avg=305.50, stdev=618.52
00:12:44.728       lat (usec): min=13, max=13232, avg=331.53, stdev=643.02
00:12:44.728      clat percentiles (usec):
00:12:44.728       | 50.000th=[  206], 99.000th=[ 4228], 99.900th=[ 7111], 99.990th=[ 8291],
00:12:44.728       | 99.999th=[12125]
00:12:44.728     bw (  KiB/s): min=325088, max=761768, per=100.00%, avg=515130.05, stdev=10329.98, samples=266
00:12:44.728     iops        : min=81272, max=190442, avg=128782.26, stdev=2582.49, samples=266
00:12:44.728    lat (usec)   : 10=0.05%, 20=0.23%, 50=0.99%, 100=7.36%, 250=63.48%
00:12:44.728    lat (usec)   : 500=25.06%, 750=0.58%, 1000=0.03%
00:12:44.728    lat (msec)   : 2=0.03%, 4=0.64%, 10=1.53%, 20=0.01%
00:12:44.728    cpu          : usr=69.10%, sys=0.49%, ctx=148701, majf=0, minf=15713
00:12:44.728    IO depths    : 1=12.3%, 2=24.6%, 4=50.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:44.728       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:44.728       complete  : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:44.728       issued rwts: total=0,1281023,1281026,0 short=0,0,0,0 dropped=0,0,0,0
00:12:44.728       latency   : target=0, window=0, percentile=100.00%, depth=8
00:12:44.728  
00:12:44.728  Run status group 0 (all jobs):
00:12:44.728    WRITE: bw=500MiB/s (525MB/s), 500MiB/s-500MiB/s (525MB/s-525MB/s), io=5004MiB (5247MB), run=10003-10003msec
00:12:44.728     TRIM: bw=500MiB/s (525MB/s), 500MiB/s-500MiB/s (525MB/s-525MB/s), io=5004MiB (5247MB), run=10003-10003msec
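The summary is internally consistent: 1,281,026 trims of 4 KiB over the 10.003 s runtime reproduce both the io= figure and the bandwidth. A quick check using only numbers printed above:

    awk 'BEGIN {
        bytes = 1281026 * 4096                 # issued trims x 4 KiB
        printf "io   = %.0f MB (%.0f MiB)\n", bytes / 1e6, bytes / 2^20
        printf "rate = %.1f MB/s\n", bytes / 1e6 / 10.003
    }'
    # io   = 5247 MB (5004 MiB)
    # rate = 524.6 MB/s   (reported as ~525 MB/s, i.e. 500 MiB/s)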
00:12:46.631  -----------------------------------------------------
00:12:46.631  Suppressions used:
00:12:46.631    count      bytes template
00:12:46.631       14        129 /usr/src/fio/parse.c
00:12:46.631        1        904 libcrypto.so
00:12:46.631  -----------------------------------------------------
00:12:46.631  
00:12:46.631  ************************************
00:12:46.631  END TEST bdev_fio_trim
00:12:46.631  ************************************
00:12:46.631  
00:12:46.631  real	0m14.518s
00:12:46.631  user	1m42.026s
00:12:46.631  sys	0m1.654s
00:12:46.631   13:49:29 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:46.631   13:49:29 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:12:46.889   13:49:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:12:46.889   13:49:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:46.889   13:49:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:12:46.889  /home/vagrant/spdk_repo/spdk
00:12:46.889   13:49:29 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:12:46.889  
00:12:46.889  real	0m29.525s
00:12:46.889  user	3m19.849s
00:12:46.889  sys	0m7.338s
00:12:46.889   13:49:29 blockdev_general.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:46.889   13:49:29 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:12:46.889  ************************************
00:12:46.889  END TEST bdev_fio
00:12:46.889  ************************************
00:12:46.889   13:49:29 blockdev_general -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:12:46.889   13:49:29 blockdev_general -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:46.889   13:49:29 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:12:46.889   13:49:29 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:46.889   13:49:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:46.889  ************************************
00:12:46.889  START TEST bdev_verify
00:12:46.889  ************************************
00:12:46.889   13:49:29 blockdev_general.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:46.889  [2024-12-11 13:49:29.579708] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:12:46.889  [2024-12-11 13:49:29.579900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73963 ]
00:12:47.148  [2024-12-11 13:49:29.784560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:47.407  [2024-12-11 13:49:29.981306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:47.407  [2024-12-11 13:49:29.981342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:47.975  [2024-12-11 13:49:30.564375] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:47.975  [2024-12-11 13:49:30.564479] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:47.975  [2024-12-11 13:49:30.572319] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:47.975  [2024-12-11 13:49:30.572371] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:47.975  [2024-12-11 13:49:30.580307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:47.975  [2024-12-11 13:49:30.580354] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:47.975  [2024-12-11 13:49:30.580370] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:48.234  [2024-12-11 13:49:30.843517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:48.234  [2024-12-11 13:49:30.843619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:48.234  [2024-12-11 13:49:30.843661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80
00:12:48.234  [2024-12-11 13:49:30.843676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:48.234  [2024-12-11 13:49:30.847134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:48.234  [2024-12-11 13:49:30.847194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
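The notices above show the passthru vbdev coming up in two phases: creation is deferred until the base bdev is present, then the claim, io_device creation, and registration complete. For reference, an equivalent manual construction over a running target would look roughly like this (a sketch; the rpc.py flag names are an assumption worth checking against the tree in use):

    # Hypothetical manual equivalent of the TestPT setup traced above.
    ./scripts/rpc.py bdev_malloc_create -b Malloc3 32 512        # 65536 x 512 B base
    ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT   # claim + register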
00:12:48.803  Running I/O for 5 seconds...
00:12:53.988      28567.00 IOPS,   111.59 MiB/s
[2024-12-11T13:49:37.019Z]     35264.00 IOPS,   137.75 MiB/s
[2024-12-11T13:49:37.019Z]     34568.33 IOPS,   135.03 MiB/s
00:12:54.247                                                                                                  Latency(us)
00:12:54.247  
[2024-12-11T13:49:37.019Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:54.247  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x1000
00:12:54.247  	 Malloc0             :       5.19    1084.81       4.24       0.00     0.00  117681.43     639.76  297595.86
00:12:54.247  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x1000 length 0x1000
00:12:54.247  	 Malloc0             :       5.07    1162.43       4.54       0.00     0.00  109881.95     628.05  383479.22
00:12:54.247  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x800
00:12:54.247  	 Malloc1p0           :       5.25     561.15       2.19       0.00     0.00  226615.62    7115.34  275625.69
00:12:54.247  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x800 length 0x800
00:12:54.247  	 Malloc1p0           :       5.26     608.40       2.38       0.00     0.00  209279.07    2855.50  203723.34
00:12:54.247  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x800
00:12:54.247  	 Malloc1p1           :       5.25     560.91       2.19       0.00     0.00  225658.12    5554.96  265639.25
00:12:54.247  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x800 length 0x800
00:12:54.247  	 Malloc1p1           :       5.26     607.92       2.37       0.00     0.00  208892.38    4837.18  200727.41
00:12:54.247  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p0           :       5.25     560.66       2.19       0.00     0.00  224917.45    7302.58  251658.24
00:12:54.247  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p0           :       5.27     607.47       2.37       0.00     0.00  208440.59    2917.91  195734.19
00:12:54.247  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p1           :       5.25     560.41       2.19       0.00     0.00  223960.32    6241.52  240673.16
00:12:54.247  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p1           :       5.27     607.00       2.37       0.00     0.00  208117.84    2855.50  199728.76
00:12:54.247  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p2           :       5.26     560.18       2.19       0.00     0.00  223089.98    5929.45  228689.43
00:12:54.247  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p2           :       5.28     606.53       2.37       0.00     0.00  207798.44    5118.05  192738.26
00:12:54.247  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p3           :       5.26     559.97       2.19       0.00     0.00  222257.75    7271.38  215707.06
00:12:54.247  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p3           :       5.28     606.10       2.37       0.00     0.00  207335.72    2902.31  189742.32
00:12:54.247  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p4           :       5.26     559.70       2.19       0.00     0.00  221334.29    4431.48  207717.91
00:12:54.247  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p4           :       5.28     605.85       2.37       0.00     0.00  206860.62    3978.97  184749.10
00:12:54.247  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p5           :       5.26     559.26       2.18       0.00     0.00  220723.80    4088.20  199728.76
00:12:54.247  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p5           :       5.28     605.60       2.37       0.00     0.00  206399.13    3386.03  180754.53
00:12:54.247  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p6           :       5.27     558.84       2.18       0.00     0.00  220212.00    4244.24  195734.19
00:12:54.247  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p6           :       5.29     605.35       2.36       0.00     0.00  205982.78    2793.08  180754.53
00:12:54.247  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x200
00:12:54.247  	 Malloc2p7           :       5.27     558.40       2.18       0.00     0.00  219834.64    2621.44  194735.54
00:12:54.247  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x200 length 0x200
00:12:54.247  	 Malloc2p7           :       5.29     605.11       2.36       0.00     0.00  205526.80    5024.43  175761.31
00:12:54.247  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x1000
00:12:54.247  	 TestPT              :       5.28     557.98       2.18       0.00     0.00  219540.65    4213.03  191739.61
00:12:54.247  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x1000 length 0x1000
00:12:54.247  	 TestPT              :       5.29     585.70       2.29       0.00     0.00  210215.49   13419.28  174762.67
00:12:54.247  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x2000
00:12:54.247  	 raid0               :       5.32     577.96       2.26       0.00     0.00  211483.05    4056.99  196732.83
00:12:54.247  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x2000 length 0x2000
00:12:54.247  	 raid0               :       5.29     604.73       2.36       0.00     0.00  204447.28    3105.16  162778.94
00:12:54.247  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x2000
00:12:54.247  	 concat0             :       5.32     577.76       2.26       0.00     0.00  210928.86    3947.76  200727.41
00:12:54.247  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x2000 length 0x2000
00:12:54.247  	 concat0             :       5.29     604.49       2.36       0.00     0.00  203947.18    3245.59  160781.65
00:12:54.247  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x1000
00:12:54.247  	 raid1               :       5.32     577.52       2.26       0.00     0.00  210342.27    5211.67  204721.98
00:12:54.247  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x1000 length 0x1000
00:12:54.247  	 raid1               :       5.30     604.24       2.36       0.00     0.00  203544.12    3932.16  163777.58
00:12:54.247  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x0 length 0x4e2
00:12:54.247  	 AIO0                :       5.32     577.14       2.25       0.00     0.00  209696.02    3479.65  211712.49
00:12:54.247  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:54.247  	 Verification LBA range: start 0x4e2 length 0x4e2
00:12:54.247  	 AIO0                :       5.30     604.02       2.36       0.00     0.00  203064.79     998.64  171766.74
00:12:54.247  
[2024-12-11T13:49:37.019Z]  ===================================================================================================================
00:12:54.247  
[2024-12-11T13:49:37.019Z]  Total                       :              19783.60      77.28       0.00     0.00  201779.12     628.05  383479.22
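As with the trim run, the totals line checks out: 19,783.60 IOPS of 4 KiB verify I/O is exactly the reported 77.28 MiB/s:

    awk 'BEGIN { printf "%.2f MiB/s\n", 19783.60 * 4096 / 2^20 }'
    # 77.28 MiB/s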
00:12:57.599  
00:12:57.599  real	0m10.158s
00:12:57.599  user	0m18.457s
00:12:57.599  sys	0m0.843s
00:12:57.599   13:49:39 blockdev_general.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:57.599   13:49:39 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:12:57.599  ************************************
00:12:57.599  END TEST bdev_verify
00:12:57.599  ************************************
00:12:57.599   13:49:39 blockdev_general -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:57.600   13:49:39 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:12:57.600   13:49:39 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:57.600   13:49:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:57.600  ************************************
00:12:57.600  START TEST bdev_verify_big_io
00:12:57.600  ************************************
00:12:57.600   13:49:39 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:57.600  [2024-12-11 13:49:39.799826] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:12:57.600  [2024-12-11 13:49:39.800029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74089 ]
00:12:57.600  [2024-12-11 13:49:40.010278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:57.600  [2024-12-11 13:49:40.196184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:57.600  [2024-12-11 13:49:40.196202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:58.168  [2024-12-11 13:49:40.711159] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:58.168  [2024-12-11 13:49:40.711230] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:58.168  [2024-12-11 13:49:40.719141] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:58.168  [2024-12-11 13:49:40.719194] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:58.168  [2024-12-11 13:49:40.727126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:58.168  [2024-12-11 13:49:40.727171] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:58.168  [2024-12-11 13:49:40.727186] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:58.427  [2024-12-11 13:49:40.958690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:58.427  [2024-12-11 13:49:40.958769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:58.427  [2024-12-11 13:49:40.958796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80
00:12:58.427  [2024-12-11 13:49:40.958811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:58.427  [2024-12-11 13:49:40.961510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:58.427  [2024-12-11 13:49:40.961555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:58.686  [2024-12-11 13:49:41.352280] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.356311] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.360335] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.364057] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.368078] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.371603] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.375537] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.379144] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.383129] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.386653] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.390670] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.394160] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.398052] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.401873] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.405332] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:58.686  [2024-12-11 13:49:41.409417] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:58.945  [2024-12-11 13:49:41.505587] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:58.945  [2024-12-11 13:49:41.513780] bdevperf.c:1965:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
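The clamp values above follow from bdev capacity: with 64 KiB verify I/O, each Malloc2pX split (8192 blocks x 512 B = 4 MiB) contains 64 I/O-sized regions and is clamped to 32, while AIO0 (5000 blocks x 2048 B, about 9.77 MiB) contains 156 and is clamped to 78. Both are half the region count, consistent with verify needing non-overlapping in-flight ranges; the halving factor is an observation from this log, not a documented formula:

    awk 'BEGIN {
        # regions = bdev bytes / io size; observed clamp = regions / 2
        print int(8192 * 512  / 65536 / 2)    # Malloc2p0..p7 -> 32
        print int(5000 * 2048 / 65536 / 2)    # AIO0          -> 78
    }'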
00:12:58.945  Running I/O for 5 seconds...
00:13:05.511       4409.00 IOPS,   275.56 MiB/s
00:13:05.511                                                                                                  Latency(us)
00:13:05.511  
[2024-12-11T13:49:48.283Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:05.511  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x100
00:13:05.511  	 Malloc0             :       5.50     232.58      14.54       0.00     0.00  541741.71     682.67 1637775.85
00:13:05.511  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x100 length 0x100
00:13:05.511  	 Malloc0             :       5.74     245.44      15.34       0.00     0.00  513460.67     690.47 1837504.61
00:13:05.511  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x80
00:13:05.511  	 Malloc1p0           :       5.88     123.84       7.74       0.00     0.00  950375.12    2496.61 1885439.51
00:13:05.511  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x80 length 0x80
00:13:05.511  	 Malloc1p0           :       6.37      50.25       3.14       0.00     0.00 2341719.81    1349.73 3659030.92
00:13:05.511  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x80
00:13:05.511  	 Malloc1p1           :       6.22      48.87       3.05       0.00     0.00 2364760.71    1427.75 3946640.34
00:13:05.511  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x80 length 0x80
00:13:05.511  	 Malloc1p1           :       6.37      50.24       3.14       0.00     0.00 2270122.87    1341.93 3515226.21
00:13:05.511  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p0           :       5.88      35.37       2.21       0.00     0.00  819870.92     600.75 1374133.88
00:13:05.511  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p0           :       5.89      38.02       2.38       0.00     0.00  760232.36     655.36 1166415.97
00:13:05.511  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p1           :       5.88      35.36       2.21       0.00     0.00  814788.41     581.24 1358155.58
00:13:05.511  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p1           :       5.89      38.01       2.38       0.00     0.00  754805.80     674.86 1142448.52
00:13:05.511  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p2           :       5.88      35.35       2.21       0.00     0.00  809391.35     639.76 1342177.28
00:13:05.511  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p2           :       5.89      38.01       2.38       0.00     0.00  750470.49     628.05 1126470.22
00:13:05.511  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p3           :       5.88      35.35       2.21       0.00     0.00  804712.34     624.15 1326198.98
00:13:05.511  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p3           :       5.97      40.20       2.51       0.00     0.00  710543.08     659.26 1110491.92
00:13:05.511  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p4           :       5.89      35.34       2.21       0.00     0.00  799905.30     651.46 1310220.68
00:13:05.511  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p4           :       5.97      40.19       2.51       0.00     0.00  706378.09     635.86 1094513.62
00:13:05.511  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p5           :       5.94      37.72       2.36       0.00     0.00  751697.01     600.75 1294242.38
00:13:05.511  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p5           :       5.97      40.18       2.51       0.00     0.00  702101.02     612.45 1078535.31
00:13:05.511  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p6           :       5.94      37.71       2.36       0.00     0.00  746961.28     600.75 1278264.08
00:13:05.511  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p6           :       5.97      40.17       2.51       0.00     0.00  697246.09     635.86 1062557.01
00:13:05.511  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x20
00:13:05.511  	 Malloc2p7           :       5.94      37.70       2.36       0.00     0.00  742285.75     592.94 1262285.78
00:13:05.511  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x20 length 0x20
00:13:05.511  	 Malloc2p7           :       5.98      40.17       2.51       0.00     0.00  693156.81     577.34 1038589.56
00:13:05.511  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x100
00:13:05.511  	 TestPT              :       6.22      49.17       3.07       0.00     0.00 2184377.91   51679.82 3403378.10
00:13:05.511  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x100 length 0x100
00:13:05.511  	 TestPT              :       6.38      47.66       2.98       0.00     0.00 2253826.16   58171.00 3099790.38
00:13:05.511  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x200
00:13:05.511  	 raid0               :       6.36      52.87       3.30       0.00     0.00 1979020.54    1419.95 3563161.11
00:13:05.511  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x200 length 0x200
00:13:05.511  	 raid0               :       6.38      55.14       3.45       0.00     0.00 1917249.15    1443.35 3099790.38
00:13:05.511  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x200
00:13:05.511  	 concat0             :       6.36      60.41       3.78       0.00     0.00 1715560.62    1412.14 3451313.01
00:13:05.511  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x200 length 0x200
00:13:05.511  	 concat0             :       6.38      60.18       3.76       0.00     0.00 1733515.57    1412.14 2987942.28
00:13:05.511  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x100
00:13:05.511  	 raid1               :       6.36      85.54       5.35       0.00     0.00 1207871.35    1763.23 3323486.60
00:13:05.511  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x100 length 0x100
00:13:05.511  	 raid1               :       6.38      92.07       5.75       0.00     0.00 1116489.90    1732.02 2860115.87
00:13:05.511  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x0 length 0x4e
00:13:05.511  	 AIO0                :       6.36      67.28       4.20       0.00     0.00  915533.75    1677.41 1989298.47
00:13:05.511  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:13:05.511  	 Verification LBA range: start 0x4e length 0x4e
00:13:05.511  	 AIO0                :       6.38      69.56       4.35       0.00     0.00  882260.50    1490.16 1661743.30
00:13:05.511  
[2024-12-11T13:49:48.283Z]  ===================================================================================================================
00:13:05.511  
[2024-12-11T13:49:48.283Z]  Total                       :               1995.93     124.75       0.00     0.00 1091371.58     577.34 3946640.34
00:13:08.045  
00:13:08.045  real	0m10.962s
00:13:08.045  user	0m20.338s
00:13:08.045  sys	0m0.629s
00:13:08.045   13:49:50 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:08.045   13:49:50 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:08.045  ************************************
00:13:08.045  END TEST bdev_verify_big_io
00:13:08.045  ************************************
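A quick consistency check on the table above: the MiB/s column is IOPS multiplied by the IO size, and with 65536-byte IOs that reduces to IOPS / 16. A short sketch using the first Malloc0 row:

    # MiB/s = IOPS x IO size; the verify jobs above use 64 KiB IOs.
    def mibps(iops: float, io_size_bytes: int) -> float:
        return iops * io_size_bytes / (1024 * 1024)

    print(round(mibps(232.58, 65536), 2))  # 14.54, matching the Malloc0 row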
00:13:08.045   13:49:50 blockdev_general -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:08.045   13:49:50 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:08.045   13:49:50 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:08.045   13:49:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:08.045  ************************************
00:13:08.045  START TEST bdev_write_zeroes
00:13:08.045  ************************************
00:13:08.045   13:49:50 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:08.045  [2024-12-11 13:49:50.798577] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:08.045  [2024-12-11 13:49:50.798719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74215 ]
00:13:08.305  [2024-12-11 13:49:50.971914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:08.565  [2024-12-11 13:49:51.096503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:08.823  [2024-12-11 13:49:51.527306] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:13:08.823  [2024-12-11 13:49:51.527378] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:13:08.823  [2024-12-11 13:49:51.535271] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:13:08.823  [2024-12-11 13:49:51.535316] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:13:08.823  [2024-12-11 13:49:51.543265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:13:08.823  [2024-12-11 13:49:51.543307] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:13:08.823  [2024-12-11 13:49:51.543324] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:13:09.082  [2024-12-11 13:49:51.752766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:13:09.082  [2024-12-11 13:49:51.752849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:09.082  [2024-12-11 13:49:51.752872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80
00:13:09.082  [2024-12-11 13:49:51.752886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:09.082  [2024-12-11 13:49:51.755721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:09.082  [2024-12-11 13:49:51.755766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:13:09.649  Running I/O for 1 seconds...
00:13:10.587      96244.00 IOPS,   375.95 MiB/s
00:13:10.587                                                                                                  Latency(us)
00:13:10.587  
[2024-12-11T13:49:53.359Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:10.587  Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc0             :       1.05    5970.64      23.32       0.00     0.00   21430.75     538.33   40944.40
00:13:10.587  Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc1p0           :       1.05    5964.41      23.30       0.00     0.00   21419.96     725.58   39945.75
00:13:10.587  Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc1p1           :       1.05    5958.24      23.27       0.00     0.00   21402.30     717.78   38447.79
00:13:10.587  Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p0           :       1.05    5952.19      23.25       0.00     0.00   21392.03     713.87   37199.48
00:13:10.587  Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p1           :       1.05    5946.22      23.23       0.00     0.00   21377.35     706.07   35701.52
00:13:10.587  Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p2           :       1.06    5940.22      23.20       0.00     0.00   21362.71     737.28   34453.21
00:13:10.587  Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p3           :       1.06    5934.16      23.18       0.00     0.00   21348.69     721.68   32955.25
00:13:10.587  Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p4           :       1.06    5928.14      23.16       0.00     0.00   21334.63     717.78   31706.94
00:13:10.587  Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p5           :       1.06    5922.28      23.13       0.00     0.00   21315.48     706.07   32955.25
00:13:10.587  Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p6           :       1.06    5916.28      23.11       0.00     0.00   21303.06     717.78   34203.55
00:13:10.587  Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 Malloc2p7           :       1.06    5910.26      23.09       0.00     0.00   21286.08     721.68   35701.52
00:13:10.587  Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 TestPT              :       1.06    5904.29      23.06       0.00     0.00   21268.97     752.88   36949.82
00:13:10.587  Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 raid0               :       1.06    5897.23      23.04       0.00     0.00   21255.51    1326.32   38447.79
00:13:10.587  Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 concat0             :       1.06    5890.30      23.01       0.00     0.00   21214.09    1334.13   39696.09
00:13:10.587  Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 raid1               :       1.07    5881.59      22.97       0.00     0.00   21171.15    2168.93   41194.06
00:13:10.587  Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:10.587  	 AIO0                :       1.07    5865.24      22.91       0.00     0.00   21145.05    1404.34   42442.36
00:13:10.587  
[2024-12-11T13:49:53.359Z]  ===================================================================================================================
00:13:10.587  
[2024-12-11T13:49:53.359Z]  Total                       :              94781.69     370.24       0.00     0.00   21314.26     538.33   42442.36
00:13:13.120  
00:13:13.120  real	0m4.777s
00:13:13.120  user	0m4.162s
00:13:13.120  sys	0m0.462s
00:13:13.120   13:49:55 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:13.120   13:49:55 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:13.120  ************************************
00:13:13.120  END TEST bdev_write_zeroes
00:13:13.120  ************************************
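The write_zeroes table also lines up with Little's law: average in-flight IOs equal IOPS times average latency, so a job that keeps its -q 128 queue full should yield a product near 128. Checking the Malloc0 row above:

    # Little's law sanity check: in-flight IOs ~= IOPS x average latency.
    iops = 5970.64                      # Malloc0 row above
    avg_latency_s = 21430.75 / 1e6      # Latency(us) column, in seconds
    print(round(iops * avg_latency_s, 1))  # ~128.0, i.e. the queue stayed full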
00:13:13.120   13:49:55 blockdev_general -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:13.120   13:49:55 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:13.120   13:49:55 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:13.120   13:49:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:13.120  ************************************
00:13:13.120  START TEST bdev_json_nonenclosed
00:13:13.120  ************************************
00:13:13.120   13:49:55 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:13.120  [2024-12-11 13:49:55.662265] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:13.120  [2024-12-11 13:49:55.662455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74290 ]
00:13:13.120  [2024-12-11 13:49:55.864558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:13.379  [2024-12-11 13:49:56.040887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.379  [2024-12-11 13:49:56.040978] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:13.379  [2024-12-11 13:49:56.040999] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:13:13.379  [2024-12-11 13:49:56.041012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:13.637  
00:13:13.637  real	0m0.725s
00:13:13.637  user	0m0.465s
00:13:13.637  sys	0m0.159s
00:13:13.637   13:49:56 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:13.637   13:49:56 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:13:13.637  ************************************
00:13:13.637  END TEST bdev_json_nonenclosed
00:13:13.637  ************************************
00:13:13.637   13:49:56 blockdev_general -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:13.637   13:49:56 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:13.637   13:49:56 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:13.637   13:49:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:13.637  ************************************
00:13:13.637  START TEST bdev_json_nonarray
00:13:13.637  ************************************
00:13:13.637   13:49:56 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:13.896  [2024-12-11 13:49:56.448507] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:13.896  [2024-12-11 13:49:56.448701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74319 ]
00:13:13.896  [2024-12-11 13:49:56.649875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:14.154  [2024-12-11 13:49:56.827238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:14.154  [2024-12-11 13:49:56.827336] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:13:14.154  [2024-12-11 13:49:56.827358] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:13:14.154  [2024-12-11 13:49:56.827372] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:14.412  
00:13:14.412  real	0m0.725s
00:13:14.412  user	0m0.491s
00:13:14.412  sys	0m0.134s
00:13:14.412   13:49:57 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:14.412   13:49:57 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:13:14.412  ************************************
00:13:14.412  END TEST bdev_json_nonarray
00:13:14.412  ************************************
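The two JSON negative tests above hand bdevperf deliberately malformed configs and expect exactly the json_config_prepare_ctx errors logged: nonenclosed.json is not wrapped in a top-level object, and nonarray.json carries a 'subsystems' value that is not an array. A hedged sketch of the two conditions being exercised (illustrative payloads, not the repo's actual test files):

    import json

    # Illustrative checks only; the real validation is in SPDK's json_config.c,
    # and the real inputs are test/bdev/nonenclosed.json and nonarray.json.
    def validate(text: str) -> None:
        cfg = json.loads(text)
        if not isinstance(cfg, dict):
            raise ValueError("Invalid JSON configuration: not enclosed in {}.")
        if not isinstance(cfg.get("subsystems"), list):
            raise ValueError("'subsystems' should be an array.")

    validate('{"subsystems": []}')       # well-formed
    # validate('["subsystems"]')         # raises: not enclosed in {}
    # validate('{"subsystems": {}}')     # raises: 'subsystems' should be an array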
00:13:14.412   13:49:57 blockdev_general -- bdev/blockdev.sh@824 -- # [[ bdev == bdev ]]
00:13:14.412   13:49:57 blockdev_general -- bdev/blockdev.sh@825 -- # run_test bdev_qos qos_test_suite ''
00:13:14.412   13:49:57 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:14.412   13:49:57 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:14.412   13:49:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:14.412  ************************************
00:13:14.412  START TEST bdev_qos
00:13:14.412  ************************************
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1129 -- # qos_test_suite ''
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=74347
00:13:14.412  Process qos testing pid: 74347
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 74347'
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 74347
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # '[' -z 74347 ']'
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:13:14.412  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:14.412   13:49:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:14.671  [2024-12-11 13:49:57.213909] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:14.671  [2024-12-11 13:49:57.214031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74347 ]
00:13:14.671  [2024-12-11 13:49:57.393556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:14.929  [2024-12-11 13:49:57.591747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:13:15.495   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:15.495   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@868 -- # return 0
00:13:15.495   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:13:15.495   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.495   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:15.754  Malloc_0
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_0
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:15.754  [
00:13:15.754  {
00:13:15.754  "name": "Malloc_0",
00:13:15.754  "aliases": [
00:13:15.754  "597ef5bb-256a-4074-b0ad-3b7721e810fc"
00:13:15.754  ],
00:13:15.754  "product_name": "Malloc disk",
00:13:15.754  "block_size": 512,
00:13:15.754  "num_blocks": 262144,
00:13:15.754  "uuid": "597ef5bb-256a-4074-b0ad-3b7721e810fc",
00:13:15.754  "assigned_rate_limits": {
00:13:15.754  "rw_ios_per_sec": 0,
00:13:15.754  "rw_mbytes_per_sec": 0,
00:13:15.754  "r_mbytes_per_sec": 0,
00:13:15.754  "w_mbytes_per_sec": 0
00:13:15.754  },
00:13:15.754  "claimed": false,
00:13:15.754  "zoned": false,
00:13:15.754  "supported_io_types": {
00:13:15.754  "read": true,
00:13:15.754  "write": true,
00:13:15.754  "unmap": true,
00:13:15.754  "flush": true,
00:13:15.754  "reset": true,
00:13:15.754  "nvme_admin": false,
00:13:15.754  "nvme_io": false,
00:13:15.754  "nvme_io_md": false,
00:13:15.754  "write_zeroes": true,
00:13:15.754  "zcopy": true,
00:13:15.754  "get_zone_info": false,
00:13:15.754  "zone_management": false,
00:13:15.754  "zone_append": false,
00:13:15.754  "compare": false,
00:13:15.754  "compare_and_write": false,
00:13:15.754  "abort": true,
00:13:15.754  "seek_hole": false,
00:13:15.754  "seek_data": false,
00:13:15.754  "copy": true,
00:13:15.754  "nvme_iov_md": false
00:13:15.754  },
00:13:15.754  "memory_domains": [
00:13:15.754  {
00:13:15.754  "dma_device_id": "system",
00:13:15.754  "dma_device_type": 1
00:13:15.754  },
00:13:15.754  {
00:13:15.754  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:15.754  "dma_device_type": 2
00:13:15.754  }
00:13:15.754  ],
00:13:15.754  "driver_specific": {}
00:13:15.754  }
00:13:15.754  ]
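The dump above also confirms the geometry requested by bdev_malloc_create: 128 MiB backed by 512-byte blocks, hence 262144 blocks. The Null_1 bdev created next is given the same size:

    # bdev_malloc_create -b Malloc_0 128 512 -> 128 MiB of 512-byte blocks.
    num_blocks, block_size = 262144, 512
    assert num_blocks * block_size == 128 * 1024 * 1024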
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:15.754  Null_1
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Null_1
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.754   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:15.755  [
00:13:15.755  {
00:13:15.755  "name": "Null_1",
00:13:15.755  "aliases": [
00:13:15.755  "377a317b-aa5a-4acf-b2d8-211f4765ea7d"
00:13:15.755  ],
00:13:15.755  "product_name": "Null disk",
00:13:15.755  "block_size": 512,
00:13:15.755  "num_blocks": 262144,
00:13:15.755  "uuid": "377a317b-aa5a-4acf-b2d8-211f4765ea7d",
00:13:15.755  "assigned_rate_limits": {
00:13:15.755  "rw_ios_per_sec": 0,
00:13:15.755  "rw_mbytes_per_sec": 0,
00:13:15.755  "r_mbytes_per_sec": 0,
00:13:15.755  "w_mbytes_per_sec": 0
00:13:15.755  },
00:13:15.755  "claimed": false,
00:13:15.755  "zoned": false,
00:13:15.755  "supported_io_types": {
00:13:15.755  "read": true,
00:13:15.755  "write": true,
00:13:15.755  "unmap": false,
00:13:15.755  "flush": false,
00:13:15.755  "reset": true,
00:13:15.755  "nvme_admin": false,
00:13:15.755  "nvme_io": false,
00:13:15.755  "nvme_io_md": false,
00:13:15.755  "write_zeroes": true,
00:13:15.755  "zcopy": false,
00:13:15.755  "get_zone_info": false,
00:13:15.755  "zone_management": false,
00:13:15.755  "zone_append": false,
00:13:15.755  "compare": false,
00:13:15.755  "compare_and_write": false,
00:13:15.755  "abort": true,
00:13:15.755  "seek_hole": false,
00:13:15.755  "seek_data": false,
00:13:15.755  "copy": false,
00:13:15.755  "nvme_iov_md": false
00:13:15.755  },
00:13:15.755  "driver_specific": {}
00:13:15.755  }
00:13:15.755  ]
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0
00:13:15.755   13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:15.755    13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0
00:13:15.755    13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:13:15.755    13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:13:15.755    13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:13:15.755     13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:15.755     13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:13:15.755     13:49:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:13:15.755  Running I/O for 60 seconds...
00:13:18.067     148992.00 IOPS,   582.00 MiB/s
[2024-12-11T13:50:01.774Z]    149760.00 IOPS,   585.00 MiB/s
[2024-12-11T13:50:02.710Z]    149333.33 IOPS,   583.33 MiB/s
[2024-12-11T13:50:03.647Z]    149632.00 IOPS,   584.50 MiB/s
[2024-12-11T13:50:03.647Z]    150835.20 IOPS,   589.20 MiB/s
[2024-12-11T13:50:03.647Z]   13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  75799.43  303197.73  0.00       0.00       307200.00  0.00     0.00   '
00:13:20.875    13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:13:20.875     13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:13:20.875    13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=75799.43
00:13:20.875    13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 75799
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=75799
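The io_result above comes out of iostat.py run with one-second intervals for five reports (-i 1 -t 5), with grep selecting the Malloc_0 row, tail -1 keeping the final report, and awk '{print $2}' pulling the IOPS column; the fractional part is then stripped for shell arithmetic. The same parsing re-expressed in Python, using the row captured above:

    # Re-expression of the grep | tail -1 | awk '{print $2}' pipeline above.
    # Column 2 is IOPS; column 6 (KiB transferred) is what the later
    # bandwidth tests read instead.
    line = "Malloc_0  75799.43  303197.73  0.00  0.00  307200.00  0.00  0.00"
    iops_column = line.split()[1]
    print(int(float(iops_column)))    # 75799, the integer echoed as io_result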
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=18000
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 18000 -gt 1000 ']'
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 18000 IOPS Malloc_0
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:20.875   13:50:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:20.875  ************************************
00:13:20.875  START TEST bdev_qos_iops
00:13:20.875  ************************************
00:13:20.875   13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1129 -- # run_qos_test 18000 IOPS Malloc_0
00:13:20.875   13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=18000
00:13:20.875   13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0
00:13:20.875    13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0
00:13:20.875    13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:13:20.875    13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:13:20.875    13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result
00:13:20.875     13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:20.875     13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:13:20.875     13:50:03 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1
00:13:22.746     135797.33 IOPS,   530.46 MiB/s
[2024-12-11T13:50:06.894Z]    123360.29 IOPS,   481.88 MiB/s
[2024-12-11T13:50:07.830Z]    114098.75 IOPS,   445.70 MiB/s
[2024-12-11T13:50:08.796Z]    106765.11 IOPS,   417.05 MiB/s
[2024-12-11T13:50:09.055Z]    100501.60 IOPS,   392.58 MiB/s
[2024-12-11T13:50:09.055Z]   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  17965.11  71860.45   0.00       0.00       72648.00   0.00     0.00   '
00:13:26.283    13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:13:26.283     13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:13:26.283    13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=17965.11
00:13:26.283    13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 17965
00:13:26.283   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=17965
00:13:26.283   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']'
00:13:26.283  ************************************
00:13:26.283  END TEST bdev_qos_iops
00:13:26.283  ************************************
00:13:26.283   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=16200
00:13:26.283   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=19800
00:13:26.283   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 17965 -lt 16200 ']'
00:13:26.283   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 17965 -gt 19800 ']'
00:13:26.283  
00:13:26.283  real	0m5.233s
00:13:26.283  user	0m0.139s
00:13:26.284  sys	0m0.048s
00:13:26.284   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:26.284   13:50:08 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
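The pass criterion visible in the trace above is a plus-or-minus 10% window around the limit, computed with shell integer arithmetic: lower_limit = limit*90/100 and upper_limit = limit*110/100. For the 18000 IOPS limit that gives 16200..19800, and the measured 17965 IOPS falls inside:

    # The +/-10% acceptance window mirroring lower_limit/upper_limit above.
    limit = 18000
    lower, upper = limit * 90 // 100, limit * 110 // 100
    assert (lower, upper) == (16200, 19800)
    assert lower <= 17965 <= upper    # measured IOPS, so bdev_qos_iops passes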
00:13:26.284    13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1
00:13:26.284    13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:13:26.284    13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:13:26.284    13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:13:26.284     13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:26.284     13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:13:26.284     13:50:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1
00:13:27.786      95576.18 IOPS,   373.34 MiB/s
[2024-12-11T13:50:11.493Z]     91506.83 IOPS,   357.45 MiB/s
[2024-12-11T13:50:12.868Z]     87861.08 IOPS,   343.21 MiB/s
[2024-12-11T13:50:13.805Z]     85013.00 IOPS,   332.08 MiB/s
[2024-12-11T13:50:14.373Z]     82584.80 IOPS,   322.60 MiB/s
[2024-12-11T13:50:14.373Z]   13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    29429.48  117717.90  0.00       0.00       119808.00  0.00     0.00   '
00:13:31.601    13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:13:31.601    13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:31.601     13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:13:31.601    13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=119808.00
00:13:31.601    13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 119808
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=119808
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=11
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 11 -lt 2 ']'
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:31.601   13:50:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:31.601  ************************************
00:13:31.601  START TEST bdev_qos_bw
00:13:31.601  ************************************
00:13:31.601   13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1129 -- # run_qos_test 11 BANDWIDTH Null_1
00:13:31.601   13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=11
00:13:31.601   13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:13:31.601    13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1
00:13:31.601    13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:13:31.601    13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:13:31.601    13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:13:31.601     13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:31.601     13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1
00:13:31.601     13:50:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1
00:13:32.795      79874.38 IOPS,   312.01 MiB/s
[2024-12-11T13:50:16.943Z]     76408.76 IOPS,   298.47 MiB/s
[2024-12-11T13:50:17.881Z]     73317.33 IOPS,   286.40 MiB/s
[2024-12-11T13:50:18.814Z]     70559.11 IOPS,   275.62 MiB/s
[2024-12-11T13:50:19.748Z]     68074.90 IOPS,   265.92 MiB/s
[2024-12-11T13:50:19.748Z]   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    2813.87   11255.50   0.00       0.00       11544.00  0.00     0.00   '
00:13:36.976    13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:13:36.976    13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:36.976     13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:13:36.976    13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=11544.00
00:13:36.976    13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 11544
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=11544
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=11264
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=10137
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=12390
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 11544 -lt 10137 ']'
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 11544 -gt 12390 ']'
00:13:36.976  
00:13:36.976  real	0m5.311s
00:13:36.976  user	0m0.183s
00:13:36.976  sys	0m0.043s
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:13:36.976  ************************************
00:13:36.976  END TEST bdev_qos_bw
00:13:36.976  ************************************
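For the bandwidth variant the script reads the KiB column of the iostat report rather than IOPS, and the measured 119808 KiB is scaled down to an 11 MB/s limit (judging by the numbers, roughly a KiB-to-MiB division plus a further reduction so the limit lands well under the unthrottled rate). The check then converts the MB limit back to KiB/s before applying the same 10% window:

    # The 11 MB/s limit becomes 11264 KiB/s for comparison with iostat output.
    limit_kib = 11 * 1024
    lower, upper = limit_kib * 90 // 100, limit_kib * 110 // 100
    assert (lower, upper) == (10137, 12390)   # as computed in the trace above
    assert lower <= 11544 <= upper            # measured KiB/s, so the test passes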
00:13:36.976      65833.19 IOPS,   257.16 MiB/s
[2024-12-11T13:50:19.748Z]  13:50:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:36.976   13:50:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:36.976  ************************************
00:13:36.976  START TEST bdev_qos_ro_bw
00:13:36.976  ************************************
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1129 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2
00:13:36.976   13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:13:36.976    13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0
00:13:36.977    13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:13:36.977    13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:13:36.977    13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:13:36.977     13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:36.977     13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:13:36.977     13:50:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1
00:13:38.850      63028.23 IOPS,   246.20 MiB/s
[2024-12-11T13:50:22.558Z]     60432.52 IOPS,   236.06 MiB/s
[2024-12-11T13:50:23.933Z]     58053.17 IOPS,   226.77 MiB/s
[2024-12-11T13:50:24.867Z]     55864.16 IOPS,   218.22 MiB/s
[2024-12-11T13:50:24.867Z]     53843.54 IOPS,   210.33 MiB/s
[2024-12-11T13:50:24.867Z]   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  511.89   2047.56    0.00       0.00       2064.00   0.00     0.00   '
00:13:42.095    13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:13:42.095    13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:42.095     13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:13:42.095    13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00
00:13:42.095    13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064
00:13:42.095  ************************************
00:13:42.095  END TEST bdev_qos_ro_bw
00:13:42.095  ************************************
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']'
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']'
00:13:42.095  
00:13:42.095  real	0m5.235s
00:13:42.095  user	0m0.166s
00:13:42.095  sys	0m0.044s
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:42.095   13:50:24 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:13:42.353   13:50:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:13:42.353   13:50:24 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.353   13:50:24 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:42.921      51969.19 IOPS,   203.00 MiB/s
00:13:42.921                                                                                                  Latency(us)
00:13:42.921  
[2024-12-11T13:50:25.693Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:42.921  Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:42.921  	 Malloc_0            :      26.92   25123.97      98.14       0.00     0.00   10093.12    1958.28  505313.77
00:13:42.921  Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:42.921  	 Null_1              :      27.16   26765.40     104.55       0.00     0.00    9543.61     686.57  222697.57
00:13:42.921  
[2024-12-11T13:50:25.693Z]  ===================================================================================================================
00:13:42.921  
[2024-12-11T13:50:25.693Z]  Total                       :              51889.37     202.69       0.00     0.00    9808.45     686.57  505313.77
00:13:42.921  {
00:13:42.921    "results": [
00:13:42.921      {
00:13:42.921        "job": "Malloc_0",
00:13:42.921        "core_mask": "0x2",
00:13:42.921        "workload": "randread",
00:13:42.921        "status": "finished",
00:13:42.921        "queue_depth": 256,
00:13:42.921        "io_size": 4096,
00:13:42.921        "runtime": 26.922976,
00:13:42.921        "iops": 25123.968464704645,
00:13:42.921        "mibps": 98.14050181525252,
00:13:42.921        "io_failed": 0,
00:13:42.921        "io_timeout": 0,
00:13:42.921        "avg_latency_us": 10093.12494900403,
00:13:42.921        "min_latency_us": 1958.2780952380951,
00:13:42.921        "max_latency_us": 505313.7676190476
00:13:42.921      },
00:13:42.921      {
00:13:42.921        "job": "Null_1",
00:13:42.921        "core_mask": "0x2",
00:13:42.921        "workload": "randread",
00:13:42.921        "status": "finished",
00:13:42.921        "queue_depth": 256,
00:13:42.921        "io_size": 4096,
00:13:42.921        "runtime": 27.164172,
00:13:42.921        "iops": 26765.402604577823,
00:13:42.921        "mibps": 104.55235392413212,
00:13:42.921        "io_failed": 0,
00:13:42.921        "io_timeout": 0,
00:13:42.921        "avg_latency_us": 9543.60663649165,
00:13:42.921        "min_latency_us": 686.567619047619,
00:13:42.921        "max_latency_us": 222697.56952380954
00:13:42.921      }
00:13:42.921    ],
00:13:42.921    "core_count": 1
00:13:42.921  }
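The JSON block above is bdevperf's machine-readable summary of the run; the Total row can be recomputed from the per-job entries. A tiny check with the iops values copied from that block:

    # Recomputing the Total row from the per-job results above.
    iops = [25123.968464704645, 26765.402604577823]   # Malloc_0, Null_1
    print(round(sum(iops), 2))                        # 51889.37, the Total line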
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 74347
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' -z 74347 ']'
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # kill -0 74347
00:13:42.921    13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # uname
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:42.921    13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74347
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:42.921  killing process with pid 74347
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74347'
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # kill 74347
00:13:42.921  Received shutdown signal, test time was about 27.216382 seconds
00:13:42.921  
00:13:42.921                                                                                                  Latency(us)
00:13:42.921  
[2024-12-11T13:50:25.693Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:42.921  
[2024-12-11T13:50:25.693Z]  ===================================================================================================================
00:13:42.921  
[2024-12-11T13:50:25.693Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:42.921   13:50:25 blockdev_general.bdev_qos -- common/autotest_common.sh@978 -- # wait 74347
00:13:44.891   13:50:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT
00:13:44.891  
00:13:44.891  real	0m30.039s
00:13:44.891  user	0m30.829s
00:13:44.891  sys	0m0.883s
00:13:44.891   13:50:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:44.891   13:50:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:44.891  ************************************
00:13:44.891  END TEST bdev_qos
00:13:44.891  ************************************
00:13:44.891   13:50:27 blockdev_general -- bdev/blockdev.sh@826 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:13:44.891   13:50:27 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:44.891   13:50:27 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:44.891   13:50:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:44.891  ************************************
00:13:44.892  START TEST bdev_qd_sampling
00:13:44.892  ************************************
00:13:44.892  Process bdev QD sampling period testing pid: 74773
00:13:44.892  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1129 -- # qd_sampling_test_suite ''
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=74773
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 74773'
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 74773
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # '[' -z 74773 ']'
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:44.892   13:50:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:44.892  [2024-12-11 13:50:27.345648] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:44.892  [2024-12-11 13:50:27.346167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74773 ]
00:13:44.892  [2024-12-11 13:50:27.557190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:45.151  [2024-12-11 13:50:27.757818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:45.151  [2024-12-11 13:50:27.757863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
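The trace above starts bdevperf with -z, which idles the app until perform_tests arrives over JSON-RPC, then waits for the listener before configuring it. A minimal sketch of the same launch pattern, assuming stock in-tree paths and the default /var/tmp/spdk.sock socket:

    # Launch idle; -m 0x3 pins two reactors, -q 256 queues 256 4 KiB random reads per core.
    ./build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' &
    QD_PID=$!
    # Poll with a harmless RPC until the socket accepts connections (what waitforlisten does).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done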
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@868 -- # return 0
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:45.719  Malloc_QD
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_QD
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # local i
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:45.719  [
00:13:45.719  {
00:13:45.719  "name": "Malloc_QD",
00:13:45.719  "aliases": [
00:13:45.719  "d4530780-9b73-4754-a524-07f441c118e3"
00:13:45.719  ],
00:13:45.719  "product_name": "Malloc disk",
00:13:45.719  "block_size": 512,
00:13:45.719  "num_blocks": 262144,
00:13:45.719  "uuid": "d4530780-9b73-4754-a524-07f441c118e3",
00:13:45.719  "assigned_rate_limits": {
00:13:45.719  "rw_ios_per_sec": 0,
00:13:45.719  "rw_mbytes_per_sec": 0,
00:13:45.719  "r_mbytes_per_sec": 0,
00:13:45.719  "w_mbytes_per_sec": 0
00:13:45.719  },
00:13:45.719  "claimed": false,
00:13:45.719  "zoned": false,
00:13:45.719  "supported_io_types": {
00:13:45.719  "read": true,
00:13:45.719  "write": true,
00:13:45.719  "unmap": true,
00:13:45.719  "flush": true,
00:13:45.719  "reset": true,
00:13:45.719  "nvme_admin": false,
00:13:45.719  "nvme_io": false,
00:13:45.719  "nvme_io_md": false,
00:13:45.719  "write_zeroes": true,
00:13:45.719  "zcopy": true,
00:13:45.719  "get_zone_info": false,
00:13:45.719  "zone_management": false,
00:13:45.719  "zone_append": false,
00:13:45.719  "compare": false,
00:13:45.719  "compare_and_write": false,
00:13:45.719  "abort": true,
00:13:45.719  "seek_hole": false,
00:13:45.719  "seek_data": false,
00:13:45.719  "copy": true,
00:13:45.719  "nvme_iov_md": false
00:13:45.719  },
00:13:45.719  "memory_domains": [
00:13:45.719  {
00:13:45.719  "dma_device_id": "system",
00:13:45.719  "dma_device_type": 1
00:13:45.719  },
00:13:45.719  {
00:13:45.719  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:45.719  "dma_device_type": 2
00:13:45.719  }
00:13:45.719  ],
00:13:45.719  "driver_specific": {}
00:13:45.719  }
00:13:45.719  ]
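The num_blocks above follows directly from the create call: 128 MiB at a 512 B block size is 134217728 / 512 = 262144 blocks. A sketch of the same create-and-inspect round trip, assuming the stock rpc.py against the default socket:

    ./scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512       # 128 MiB, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_get_bdevs -b Malloc_QD -t 2000 | jq -r '.[0].num_blocks'
    # prints 262144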
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@911 -- # return 0
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2
00:13:45.719   13:50:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:45.979  Running I/O for 5 seconds...
00:13:47.852      57344.00 IOPS,   224.00 MiB/s
[2024-12-11T13:50:30.624Z]  13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.852    13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:13:47.852    13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.852    13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:47.852    13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{
00:13:47.852  "tick_rate": 2100000000,
00:13:47.852  "ticks": 1566843236982,
00:13:47.852  "bdevs": [
00:13:47.852  {
00:13:47.852  "name": "Malloc_QD",
00:13:47.852  "bytes_read": 445682176,
00:13:47.852  "num_read_ops": 108803,
00:13:47.852  "bytes_written": 0,
00:13:47.852  "num_write_ops": 0,
00:13:47.852  "bytes_unmapped": 0,
00:13:47.852  "num_unmap_ops": 0,
00:13:47.852  "bytes_copied": 0,
00:13:47.852  "num_copy_ops": 0,
00:13:47.852  "read_latency_ticks": 2033741214338,
00:13:47.852  "max_read_latency_ticks": 25900826,
00:13:47.852  "min_read_latency_ticks": 343064,
00:13:47.852  "write_latency_ticks": 0,
00:13:47.852  "max_write_latency_ticks": 0,
00:13:47.852  "min_write_latency_ticks": 0,
00:13:47.852  "unmap_latency_ticks": 0,
00:13:47.852  "max_unmap_latency_ticks": 0,
00:13:47.852  "min_unmap_latency_ticks": 0,
00:13:47.852  "copy_latency_ticks": 0,
00:13:47.852  "max_copy_latency_ticks": 0,
00:13:47.852  "min_copy_latency_ticks": 0,
00:13:47.852  "io_error": {},
00:13:47.852  "queue_depth_polling_period": 10,
00:13:47.852  "queue_depth": 512,
00:13:47.852  "io_time": 20,
00:13:47.852  "weighted_io_time": 10240
00:13:47.852  }
00:13:47.852  ]
00:13:47.852  }'
00:13:47.852    13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']'
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']'
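Besides the polling-period check, the sampled queue depth is recoverable from the iostat JSON above: weighted_io_time / io_time = 10240 / 20 = 512, i.e. the aggregate depth of the two -q 256 job cores. A sketch of the same extraction, assuming the stock rpc.py:

    ./scripts/rpc.py bdev_get_iostat -b Malloc_QD \
        | jq -r '.bdevs[0] | .weighted_io_time / .io_time'   # 512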
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.852   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:47.852                                                                                                  Latency(us)
[2024-12-11T13:50:30.624Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:47.852  Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:13:47.852  	 Malloc_QD           :       1.97   24639.80      96.25       0.00     0.00   10350.57    1622.80   12358.22
00:13:47.852  Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:47.852  	 Malloc_QD           :       1.98   32792.91     128.10       0.00     0.00    7783.93     807.50   11234.74
[2024-12-11T13:50:30.624Z]  ===================================================================================================================
[2024-12-11T13:50:30.624Z]  Total                       :              57432.71     224.35       0.00     0.00    8884.74     807.50   12358.22
00:13:48.112  {
00:13:48.112    "results": [
00:13:48.112      {
00:13:48.112        "job": "Malloc_QD",
00:13:48.112        "core_mask": "0x1",
00:13:48.112        "workload": "randread",
00:13:48.112        "status": "finished",
00:13:48.112        "queue_depth": 256,
00:13:48.112        "io_size": 4096,
00:13:48.112        "runtime": 1.974042,
00:13:48.112        "iops": 24639.79996372924,
00:13:48.112        "mibps": 96.24921860831735,
00:13:48.112        "io_failed": 0,
00:13:48.112        "io_timeout": 0,
00:13:48.112        "avg_latency_us": 10350.565453634084,
00:13:48.112        "min_latency_us": 1622.7961904761905,
00:13:48.112        "max_latency_us": 12358.217142857144
00:13:48.112      },
00:13:48.112      {
00:13:48.112        "job": "Malloc_QD",
00:13:48.112        "core_mask": "0x2",
00:13:48.112        "workload": "randread",
00:13:48.112        "status": "finished",
00:13:48.112        "queue_depth": 256,
00:13:48.112        "io_size": 4096,
00:13:48.112        "runtime": 1.975061,
00:13:48.112        "iops": 32792.91120628679,
00:13:48.112        "mibps": 128.0973093995578,
00:13:48.112        "io_failed": 0,
00:13:48.112        "io_timeout": 0,
00:13:48.112        "avg_latency_us": 7783.9283869753435,
00:13:48.112        "min_latency_us": 807.4971428571429,
00:13:48.112        "max_latency_us": 11234.742857142857
00:13:48.112      }
00:13:48.112    ],
00:13:48.112    "core_count": 2
00:13:48.112  }
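The MiB/s column is a direct function of IOPS at the fixed 4096 B I/O size: MiB/s = IOPS * 4096 / 2^20 = IOPS / 256, which reproduces the table:

    24639.80 / 256 ≈  96.25 MiB/s   (core mask 0x1)
    32792.91 / 256 ≈ 128.10 MiB/s   (core mask 0x2)
    57432.71 / 256 ≈ 224.35 MiB/s   (total)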
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 74773
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' -z 74773 ']'
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # kill -0 74773
00:13:48.112    13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # uname
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:48.112    13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74773
00:13:48.112  killing process with pid 74773
00:13:48.112  Received shutdown signal, test time was about 2.155732 seconds
00:13:48.112                                                                                                  Latency(us)
[2024-12-11T13:50:30.884Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-11T13:50:30.884Z]  ===================================================================================================================
[2024-12-11T13:50:30.884Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74773'
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # kill 74773
00:13:48.112   13:50:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@978 -- # wait 74773
00:13:50.014  ************************************
00:13:50.014  END TEST bdev_qd_sampling
00:13:50.014  ************************************
00:13:50.014   13:50:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT
00:13:50.014  
00:13:50.014  real	0m5.030s
00:13:50.014  user	0m9.221s
00:13:50.014  sys	0m0.616s
00:13:50.014   13:50:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:50.014   13:50:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:50.014   13:50:32 blockdev_general -- bdev/blockdev.sh@827 -- # run_test bdev_error error_test_suite ''
00:13:50.014   13:50:32 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:50.014   13:50:32 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:50.014   13:50:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:13:50.014  ************************************
00:13:50.014  START TEST bdev_error
00:13:50.014  ************************************
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@1129 -- # error_test_suite ''
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=74856
00:13:50.014  Process error testing pid: 74856
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 74856'
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:13:50.014   13:50:32 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 74856
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 74856 ']'
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:50.014  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:50.014   13:50:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:50.014  [2024-12-11 13:50:32.415004] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:50.014  [2024-12-11 13:50:32.415155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74856 ]
00:13:50.014  [2024-12-11 13:50:32.589790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:50.014  [2024-12-11 13:50:32.717841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:13:50.580   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:50.580   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:13:50.580   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:13:50.580   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.580   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:50.839  Dev_1
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.839   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:50.839  [
00:13:50.839  {
00:13:50.839  "name": "Dev_1",
00:13:50.839  "aliases": [
00:13:50.839  "f5c05277-7b69-4e76-8ff8-cdba136c6865"
00:13:50.839  ],
00:13:50.839  "product_name": "Malloc disk",
00:13:50.839  "block_size": 512,
00:13:50.839  "num_blocks": 262144,
00:13:50.839  "uuid": "f5c05277-7b69-4e76-8ff8-cdba136c6865",
00:13:50.839  "assigned_rate_limits": {
00:13:50.839  "rw_ios_per_sec": 0,
00:13:50.839  "rw_mbytes_per_sec": 0,
00:13:50.839  "r_mbytes_per_sec": 0,
00:13:50.839  "w_mbytes_per_sec": 0
00:13:50.839  },
00:13:50.839  "claimed": false,
00:13:50.839  "zoned": false,
00:13:50.839  "supported_io_types": {
00:13:50.839  "read": true,
00:13:50.839  "write": true,
00:13:50.839  "unmap": true,
00:13:50.839  "flush": true,
00:13:50.839  "reset": true,
00:13:50.839  "nvme_admin": false,
00:13:50.839  "nvme_io": false,
00:13:50.839  "nvme_io_md": false,
00:13:50.839  "write_zeroes": true,
00:13:50.839  "zcopy": true,
00:13:50.839  "get_zone_info": false,
00:13:50.839  "zone_management": false,
00:13:50.839  "zone_append": false,
00:13:50.839  "compare": false,
00:13:50.839  "compare_and_write": false,
00:13:50.839  "abort": true,
00:13:50.839  "seek_hole": false,
00:13:50.839  "seek_data": false,
00:13:50.839  "copy": true,
00:13:50.839  "nvme_iov_md": false
00:13:50.839  },
00:13:50.839  "memory_domains": [
00:13:50.839  {
00:13:50.839  "dma_device_id": "system",
00:13:50.839  "dma_device_type": 1
00:13:50.839  },
00:13:50.839  {
00:13:50.839  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:50.839  "dma_device_type": 2
00:13:50.839  }
00:13:50.839  ],
00:13:50.839  "driver_specific": {}
00:13:50.839  }
00:13:50.839  ]
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:13:50.839   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:50.839  true
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.839   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.839   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:51.097  Dev_2
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.097   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:51.097  [
00:13:51.097  {
00:13:51.097  "name": "Dev_2",
00:13:51.097  "aliases": [
00:13:51.097  "01308b15-0640-4b92-be3f-bf00dbdf41b5"
00:13:51.097  ],
00:13:51.097  "product_name": "Malloc disk",
00:13:51.097  "block_size": 512,
00:13:51.097  "num_blocks": 262144,
00:13:51.097  "uuid": "01308b15-0640-4b92-be3f-bf00dbdf41b5",
00:13:51.097  "assigned_rate_limits": {
00:13:51.097  "rw_ios_per_sec": 0,
00:13:51.097  "rw_mbytes_per_sec": 0,
00:13:51.097  "r_mbytes_per_sec": 0,
00:13:51.097  "w_mbytes_per_sec": 0
00:13:51.097  },
00:13:51.097  "claimed": false,
00:13:51.097  "zoned": false,
00:13:51.097  "supported_io_types": {
00:13:51.097  "read": true,
00:13:51.097  "write": true,
00:13:51.097  "unmap": true,
00:13:51.097  "flush": true,
00:13:51.097  "reset": true,
00:13:51.097  "nvme_admin": false,
00:13:51.097  "nvme_io": false,
00:13:51.097  "nvme_io_md": false,
00:13:51.097  "write_zeroes": true,
00:13:51.097  "zcopy": true,
00:13:51.097  "get_zone_info": false,
00:13:51.097  "zone_management": false,
00:13:51.097  "zone_append": false,
00:13:51.097  "compare": false,
00:13:51.097  "compare_and_write": false,
00:13:51.097  "abort": true,
00:13:51.097  "seek_hole": false,
00:13:51.097  "seek_data": false,
00:13:51.097  "copy": true,
00:13:51.097  "nvme_iov_md": false
00:13:51.097  },
00:13:51.097  "memory_domains": [
00:13:51.097  {
00:13:51.097  "dma_device_id": "system",
00:13:51.097  "dma_device_type": 1
00:13:51.097  },
00:13:51.097  {
00:13:51.097  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:51.097  "dma_device_type": 2
00:13:51.097  }
00:13:51.097  ],
00:13:51.097  "driver_specific": {}
00:13:51.097  }
00:13:51.097  ]
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:13:51.097   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:51.097   13:50:33 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.097   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1
00:13:51.097   13:50:33 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:13:51.097  Running I/O for 5 seconds...
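The setup driven above stacks an error bdev on Dev_1, arms five failures, and then kicks the idle bdevperf over RPC. A condensed sketch of that sequence, assuming stock in-tree paths:

    ./scripts/rpc.py bdev_error_create Dev_1                           # exposes EE_Dev_1 on top of Dev_1
    ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5 # fail the next 5 I/Os
    ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests            # start the queued job set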
00:13:52.032  Process still exists, as continue on error is set. Pid: 74856
00:13:52.032   13:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 74856
00:13:52.032   13:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process still exists, as continue on error is set. Pid: 74856'
00:13:52.032   13:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:13:52.032   13:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.032   13:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:52.032   13:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.032   13:50:34 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1
00:13:52.032   13:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.032   13:50:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:52.032  Timeout while waiting for response:
00:13:52.032  
00:13:52.032  
00:13:52.598   13:50:35 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.598   13:50:35 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5
00:13:53.533      72683.00 IOPS,   283.92 MiB/s
[2024-12-11T13:50:37.242Z]     85229.50 IOPS,   332.93 MiB/s
[2024-12-11T13:50:38.177Z]     90174.33 IOPS,   352.24 MiB/s
[2024-12-11T13:50:39.113Z]     92914.75 IOPS,   362.95 MiB/s
00:13:56.341                                                                                                  Latency(us)
[2024-12-11T13:50:39.113Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:56.341  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:56.341  	 EE_Dev_1            :       0.91   39819.30     155.54       5.52     0.00     398.81     148.24     799.70
00:13:56.341  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:56.341  	 Dev_2               :       5.00   80575.19     314.75       0.00     0.00     195.76      67.29  405449.39
[2024-12-11T13:50:39.113Z]  ===================================================================================================================
[2024-12-11T13:50:39.113Z]  Total                       :             120394.49     470.29       5.52     0.00     212.44      67.29  405449.39
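The EE_Dev_1 row squares with the injection budget: 5.52 Fail/s over its 0.91 s runtime is

    5.52 × 0.91 ≈ 5 failed I/Os

exactly the five errors armed with -n 5; once those are consumed, I/O passes through to Dev_2 for the remainder of the 5 s run, and the -f (continue on error) flag keeps the process alive through the failures.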
00:13:57.717   13:50:40 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 74856
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' -z 74856 ']'
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # kill -0 74856
00:13:57.717    13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # uname
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:57.717    13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74856
00:13:57.717  killing process with pid 74856
00:13:57.717  Received shutdown signal, test time was about 5.000000 seconds
00:13:57.717                                                                                                  Latency(us)
[2024-12-11T13:50:40.489Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-11T13:50:40.489Z]  ===================================================================================================================
[2024-12-11T13:50:40.489Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74856'
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # kill 74856
00:13:57.717   13:50:40 blockdev_general.bdev_error -- common/autotest_common.sh@978 -- # wait 74856
00:13:59.111  Process error testing pid: 74968
00:13:59.111   13:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=74968
00:13:59.111   13:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 74968'
00:13:59.111   13:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:13:59.111   13:50:41 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 74968
00:13:59.111   13:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 74968 ']'
00:13:59.111   13:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:59.111   13:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:59.111  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:59.111   13:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:59.111   13:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:59.111   13:50:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:59.369  [2024-12-11 13:50:41.893007] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:13:59.369  [2024-12-11 13:50:41.893209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74968 ]
00:13:59.369  [2024-12-11 13:50:42.091305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:59.628  [2024-12-11 13:50:42.225435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:00.197   13:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:00.197   13:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:14:00.197   13:50:42 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:14:00.197   13:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.197   13:50:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.456  Dev_1
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.456   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.456   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.456  [
00:14:00.456  {
00:14:00.456  "name": "Dev_1",
00:14:00.456  "aliases": [
00:14:00.456  "c4604a98-c0a7-4881-a868-91897bba73ad"
00:14:00.456  ],
00:14:00.456  "product_name": "Malloc disk",
00:14:00.456  "block_size": 512,
00:14:00.456  "num_blocks": 262144,
00:14:00.456  "uuid": "c4604a98-c0a7-4881-a868-91897bba73ad",
00:14:00.456  "assigned_rate_limits": {
00:14:00.456  "rw_ios_per_sec": 0,
00:14:00.456  "rw_mbytes_per_sec": 0,
00:14:00.456  "r_mbytes_per_sec": 0,
00:14:00.456  "w_mbytes_per_sec": 0
00:14:00.456  },
00:14:00.456  "claimed": false,
00:14:00.456  "zoned": false,
00:14:00.456  "supported_io_types": {
00:14:00.456  "read": true,
00:14:00.456  "write": true,
00:14:00.456  "unmap": true,
00:14:00.456  "flush": true,
00:14:00.456  "reset": true,
00:14:00.456  "nvme_admin": false,
00:14:00.456  "nvme_io": false,
00:14:00.456  "nvme_io_md": false,
00:14:00.456  "write_zeroes": true,
00:14:00.456  "zcopy": true,
00:14:00.456  "get_zone_info": false,
00:14:00.456  "zone_management": false,
00:14:00.456  "zone_append": false,
00:14:00.456  "compare": false,
00:14:00.456  "compare_and_write": false,
00:14:00.456  "abort": true,
00:14:00.456  "seek_hole": false,
00:14:00.456  "seek_data": false,
00:14:00.456  "copy": true,
00:14:00.456  "nvme_iov_md": false
00:14:00.456  },
00:14:00.456  "memory_domains": [
00:14:00.456  {
00:14:00.456  "dma_device_id": "system",
00:14:00.456  "dma_device_type": 1
00:14:00.456  },
00:14:00.456  {
00:14:00.456  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:00.456  "dma_device_type": 2
00:14:00.457  }
00:14:00.457  ],
00:14:00.457  "driver_specific": {}
00:14:00.457  }
00:14:00.457  ]
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:14:00.457   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.457  true
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.457   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.457   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.716  Dev_2
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.717   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.717  [
00:14:00.717  {
00:14:00.717  "name": "Dev_2",
00:14:00.717  "aliases": [
00:14:00.717  "97c9149a-e507-4071-8f77-a99d0352b4bc"
00:14:00.717  ],
00:14:00.717  "product_name": "Malloc disk",
00:14:00.717  "block_size": 512,
00:14:00.717  "num_blocks": 262144,
00:14:00.717  "uuid": "97c9149a-e507-4071-8f77-a99d0352b4bc",
00:14:00.717  "assigned_rate_limits": {
00:14:00.717  "rw_ios_per_sec": 0,
00:14:00.717  "rw_mbytes_per_sec": 0,
00:14:00.717  "r_mbytes_per_sec": 0,
00:14:00.717  "w_mbytes_per_sec": 0
00:14:00.717  },
00:14:00.717  "claimed": false,
00:14:00.717  "zoned": false,
00:14:00.717  "supported_io_types": {
00:14:00.717  "read": true,
00:14:00.717  "write": true,
00:14:00.717  "unmap": true,
00:14:00.717  "flush": true,
00:14:00.717  "reset": true,
00:14:00.717  "nvme_admin": false,
00:14:00.717  "nvme_io": false,
00:14:00.717  "nvme_io_md": false,
00:14:00.717  "write_zeroes": true,
00:14:00.717  "zcopy": true,
00:14:00.717  "get_zone_info": false,
00:14:00.717  "zone_management": false,
00:14:00.717  "zone_append": false,
00:14:00.717  "compare": false,
00:14:00.717  "compare_and_write": false,
00:14:00.717  "abort": true,
00:14:00.717  "seek_hole": false,
00:14:00.717  "seek_data": false,
00:14:00.717  "copy": true,
00:14:00.717  "nvme_iov_md": false
00:14:00.717  },
00:14:00.717  "memory_domains": [
00:14:00.717  {
00:14:00.717  "dma_device_id": "system",
00:14:00.717  "dma_device_type": 1
00:14:00.717  },
00:14:00.717  {
00:14:00.717  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:00.717  "dma_device_type": 2
00:14:00.717  }
00:14:00.717  ],
00:14:00.717  "driver_specific": {}
00:14:00.717  }
00:14:00.717  ]
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:14:00.717   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.717   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 74968
00:14:00.717   13:50:43 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # local es=0
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@654 -- # valid_exec_arg wait 74968
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # local arg=wait
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:00.717    13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # type -t wait
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:00.717   13:50:43 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # wait 74968
00:14:00.977  Running I/O for 5 seconds...
00:14:00.977  task offset: 131536 on job bdev=EE_Dev_1 fails
00:14:00.977                                                                                                  Latency(us)
[2024-12-11T13:50:43.749Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:00.977  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:14:00.977  Job: EE_Dev_1 ended in about 0.00 seconds with error
00:14:00.977  	 EE_Dev_1            :       0.00   23706.90      92.61    5387.93     0.00     436.89     164.82     811.40
00:14:00.977  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:14:00.977  	 Dev_2               :       0.00   18680.68      72.97       0.00     0.00     570.97     168.72    1037.65
[2024-12-11T13:50:43.749Z]  ===================================================================================================================
[2024-12-11T13:50:43.749Z]  Total                       :              42387.57     165.58    5387.93     0.00     509.61     164.82    1037.65
00:14:00.977  request:
00:14:00.977  {
00:14:00.977    "method": "perform_tests",
00:14:00.977    "req_id": 1
00:14:00.977  }
00:14:00.977  Got JSON-RPC error response
00:14:00.977  response:
00:14:00.977  {
00:14:00.977    "code": -32603,
00:14:00.977    "message": "bdevperf failed with error Operation not permitted"
00:14:00.977  }
00:14:00.977  [2024-12-11 13:50:43.525451] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:02.884   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # es=255
00:14:02.884   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:02.884  ************************************
00:14:02.884  END TEST bdev_error
00:14:02.884  ************************************
00:14:02.884   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@664 -- # es=127
00:14:02.884   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@665 -- # case "$es" in
00:14:02.884   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@672 -- # es=1
00:14:02.884   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@679 -- # (( !es == 0 ))
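This second instance is launched without -f, so the injected failures abort the run: perform_tests is rejected with -32603 and wait reaps a non-zero status. The NOT wrapper then asserts exactly that, normalizing the raw status (255, clamped to 127 by the out-of-range check, then collapsed to a generic 1) before requiring (( !es == 0 )) to hold, which it does precisely because the command failed.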
00:14:02.884  
00:14:02.884  real	0m13.305s
00:14:02.884  user	0m13.312s
00:14:02.884  sys	0m1.112s
00:14:02.885   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.885   13:50:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:14:03.144   13:50:45 blockdev_general -- bdev/blockdev.sh@828 -- # run_test bdev_stat stat_test_suite ''
00:14:03.144   13:50:45 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:03.144   13:50:45 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:03.144   13:50:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:14:03.144  ************************************
00:14:03.144  START TEST bdev_stat
00:14:03.144  ************************************
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@1129 -- # stat_test_suite ''
00:14:03.144  Process Bdev IO statistics testing pid: 75032
00:14:03.144  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=75032
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 75032'
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 75032
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # '[' -z 75032 ']'
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:03.144   13:50:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:03.144  [2024-12-11 13:50:45.809328] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:03.144  [2024-12-11 13:50:45.809538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75032 ]
00:14:03.404  [2024-12-11 13:50:46.021033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:03.663  [2024-12-11 13:50:46.224912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:03.663  [2024-12-11 13:50:46.224942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@868 -- # return 0
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:04.230  Malloc_STAT
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_STAT
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # local i
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:04.230  [
00:14:04.230  {
00:14:04.230  "name": "Malloc_STAT",
00:14:04.230  "aliases": [
00:14:04.230  "9e708272-ca6c-4f9e-854b-5a4725df442c"
00:14:04.230  ],
00:14:04.230  "product_name": "Malloc disk",
00:14:04.230  "block_size": 512,
00:14:04.230  "num_blocks": 262144,
00:14:04.230  "uuid": "9e708272-ca6c-4f9e-854b-5a4725df442c",
00:14:04.230  "assigned_rate_limits": {
00:14:04.230  "rw_ios_per_sec": 0,
00:14:04.230  "rw_mbytes_per_sec": 0,
00:14:04.230  "r_mbytes_per_sec": 0,
00:14:04.230  "w_mbytes_per_sec": 0
00:14:04.230  },
00:14:04.230  "claimed": false,
00:14:04.230  "zoned": false,
00:14:04.230  "supported_io_types": {
00:14:04.230  "read": true,
00:14:04.230  "write": true,
00:14:04.230  "unmap": true,
00:14:04.230  "flush": true,
00:14:04.230  "reset": true,
00:14:04.230  "nvme_admin": false,
00:14:04.230  "nvme_io": false,
00:14:04.230  "nvme_io_md": false,
00:14:04.230  "write_zeroes": true,
00:14:04.230  "zcopy": true,
00:14:04.230  "get_zone_info": false,
00:14:04.230  "zone_management": false,
00:14:04.230  "zone_append": false,
00:14:04.230  "compare": false,
00:14:04.230  "compare_and_write": false,
00:14:04.230  "abort": true,
00:14:04.230  "seek_hole": false,
00:14:04.230  "seek_data": false,
00:14:04.230  "copy": true,
00:14:04.230  "nvme_iov_md": false
00:14:04.230  },
00:14:04.230  "memory_domains": [
00:14:04.230  {
00:14:04.230  "dma_device_id": "system",
00:14:04.230  "dma_device_type": 1
00:14:04.230  },
00:14:04.230  {
00:14:04.230  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:04.230  "dma_device_type": 2
00:14:04.230  }
00:14:04.230  ],
00:14:04.230  "driver_specific": {}
00:14:04.230  }
00:14:04.230  ]
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- common/autotest_common.sh@911 -- # return 0
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2
00:14:04.230   13:50:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:04.488  Running I/O for 10 seconds...
00:14:06.392      96512.00 IOPS,   377.00 MiB/s
[2024-12-11T13:50:49.164Z]  13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0
00:14:06.392    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:14:06.392    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.392    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:06.392    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.392   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{
00:14:06.392  "tick_rate": 2100000000,
00:14:06.392  "ticks": 1605732331856,
00:14:06.392  "bdevs": [
00:14:06.392  {
00:14:06.392  "name": "Malloc_STAT",
00:14:06.392  "bytes_read": 743477760,
00:14:06.392  "num_read_ops": 181507,
00:14:06.392  "bytes_written": 0,
00:14:06.392  "num_write_ops": 0,
00:14:06.392  "bytes_unmapped": 0,
00:14:06.392  "num_unmap_ops": 0,
00:14:06.392  "bytes_copied": 0,
00:14:06.392  "num_copy_ops": 0,
00:14:06.392  "read_latency_ticks": 2053003927012,
00:14:06.392  "max_read_latency_ticks": 14343772,
00:14:06.392  "min_read_latency_ticks": 311560,
00:14:06.392  "write_latency_ticks": 0,
00:14:06.392  "max_write_latency_ticks": 0,
00:14:06.392  "min_write_latency_ticks": 0,
00:14:06.392  "unmap_latency_ticks": 0,
00:14:06.392  "max_unmap_latency_ticks": 0,
00:14:06.392  "min_unmap_latency_ticks": 0,
00:14:06.392  "copy_latency_ticks": 0,
00:14:06.392  "max_copy_latency_ticks": 0,
00:14:06.392  "min_copy_latency_ticks": 0,
00:14:06.393  "io_error": {}
00:14:06.393  }
00:14:06.393  ]
00:14:06.393  }'
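Tick counters convert to wall time through tick_rate (2.1 GHz, i.e. 2100 ticks per microsecond), so the snapshot above implies an average read latency of

    2053003927012 / 181507 / 2100 ≈ 5386 us

in line with the ~5.4 ms per-core Malloc_STAT averages reported at the end of the run.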
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops'
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=181507
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{
00:14:06.393  "tick_rate": 2100000000,
00:14:06.393  "ticks": 1605814914052,
00:14:06.393  "name": "Malloc_STAT",
00:14:06.393  "channels": [
00:14:06.393  {
00:14:06.393  "thread_id": 2,
00:14:06.393  "bytes_read": 376438784,
00:14:06.393  "num_read_ops": 91904,
00:14:06.393  "bytes_written": 0,
00:14:06.393  "num_write_ops": 0,
00:14:06.393  "bytes_unmapped": 0,
00:14:06.393  "num_unmap_ops": 0,
00:14:06.393  "bytes_copied": 0,
00:14:06.393  "num_copy_ops": 0,
00:14:06.393  "read_latency_ticks": 1046149687344,
00:14:06.393  "max_read_latency_ticks": 14343772,
00:14:06.393  "min_read_latency_ticks": 8054562,
00:14:06.393  "write_latency_ticks": 0,
00:14:06.393  "max_write_latency_ticks": 0,
00:14:06.393  "min_write_latency_ticks": 0,
00:14:06.393  "unmap_latency_ticks": 0,
00:14:06.393  "max_unmap_latency_ticks": 0,
00:14:06.393  "min_unmap_latency_ticks": 0,
00:14:06.393  "copy_latency_ticks": 0,
00:14:06.393  "max_copy_latency_ticks": 0,
00:14:06.393  "min_copy_latency_ticks": 0
00:14:06.393  },
00:14:06.393  {
00:14:06.393  "thread_id": 3,
00:14:06.393  "bytes_read": 381681664,
00:14:06.393  "num_read_ops": 93184,
00:14:06.393  "bytes_written": 0,
00:14:06.393  "num_write_ops": 0,
00:14:06.393  "bytes_unmapped": 0,
00:14:06.393  "num_unmap_ops": 0,
00:14:06.393  "bytes_copied": 0,
00:14:06.393  "num_copy_ops": 0,
00:14:06.393  "read_latency_ticks": 1048572848988,
00:14:06.393  "max_read_latency_ticks": 14234594,
00:14:06.393  "min_read_latency_ticks": 7686024,
00:14:06.393  "write_latency_ticks": 0,
00:14:06.393  "max_write_latency_ticks": 0,
00:14:06.393  "min_write_latency_ticks": 0,
00:14:06.393  "unmap_latency_ticks": 0,
00:14:06.393  "max_unmap_latency_ticks": 0,
00:14:06.393  "min_unmap_latency_ticks": 0,
00:14:06.393  "copy_latency_ticks": 0,
00:14:06.393  "max_copy_latency_ticks": 0,
00:14:06.393  "min_copy_latency_ticks": 0
00:14:06.393  }
00:14:06.393  ]
00:14:06.393  }'
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops'
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=91904
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=91904
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops'
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=93184
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=185088
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:06.393      94848.00 IOPS,   370.50 MiB/s
[2024-12-11T13:50:49.165Z]   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{
00:14:06.393  "tick_rate": 2100000000,
00:14:06.393  "ticks": 1605933155068,
00:14:06.393  "bdevs": [
00:14:06.393  {
00:14:06.393  "name": "Malloc_STAT",
00:14:06.393  "bytes_read": 779129344,
00:14:06.393  "num_read_ops": 190211,
00:14:06.393  "bytes_written": 0,
00:14:06.393  "num_write_ops": 0,
00:14:06.393  "bytes_unmapped": 0,
00:14:06.393  "num_unmap_ops": 0,
00:14:06.393  "bytes_copied": 0,
00:14:06.393  "num_copy_ops": 0,
00:14:06.393  "read_latency_ticks": 2155288088450,
00:14:06.393  "max_read_latency_ticks": 15676626,
00:14:06.393  "min_read_latency_ticks": 311560,
00:14:06.393  "write_latency_ticks": 0,
00:14:06.393  "max_write_latency_ticks": 0,
00:14:06.393  "min_write_latency_ticks": 0,
00:14:06.393  "unmap_latency_ticks": 0,
00:14:06.393  "max_unmap_latency_ticks": 0,
00:14:06.393  "min_unmap_latency_ticks": 0,
00:14:06.393  "copy_latency_ticks": 0,
00:14:06.393  "max_copy_latency_ticks": 0,
00:14:06.393  "min_copy_latency_ticks": 0,
00:14:06.393  "io_error": {}
00:14:06.393  }
00:14:06.393  ]
00:14:06.393  }'
00:14:06.393    13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops'
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=190211
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 185088 -lt 181507 ']'
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 185088 -gt 190211 ']'
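
The two [ ... ] tests just traced encode the whole point of bdev_stat: I/O keeps running between the three snapshots, so the per-channel sum captured in the middle must land between the two bdev-level read counts. Distilled into standalone bash, with the values from this run:

    io_count1=181507                 # bdev-level num_read_ops, first snapshot
    io_count_per_channel_all=185088  # sum of per-channel num_read_ops
    io_count2=190211                 # bdev-level num_read_ops, second snapshot
    # The middle snapshot must be bracketed by the outer two, or the
    # per-channel accounting disagrees with the per-bdev accounting.
    if [ "$io_count_per_channel_all" -lt "$io_count1" ] ||
       [ "$io_count_per_channel_all" -gt "$io_count2" ]; then
        echo "per-channel iostat inconsistent with bdev iostat" >&2
        exit 1
    fi
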
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.393   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:06.393  
00:14:06.393                                                                                                  Latency(us)
00:14:06.393  
[2024-12-11T13:50:49.165Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:06.393  Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:14:06.393  	 Malloc_STAT         :       2.06   47083.40     183.92       0.00     0.00    5423.22    1131.28    7489.83
00:14:06.393  Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:14:06.393  	 Malloc_STAT         :       2.06   47676.76     186.24       0.00     0.00    5356.15     830.90    7177.75
00:14:06.393  
[2024-12-11T13:50:49.165Z]  ===================================================================================================================
00:14:06.393  
[2024-12-11T13:50:49.165Z]  Total                       :              94760.16     370.16       0.00     0.00    5389.46     830.90    7489.83
00:14:06.652  {
00:14:06.652    "results": [
00:14:06.652      {
00:14:06.652        "job": "Malloc_STAT",
00:14:06.652        "core_mask": "0x1",
00:14:06.652        "workload": "randread",
00:14:06.652        "status": "finished",
00:14:06.652        "queue_depth": 256,
00:14:06.652        "io_size": 4096,
00:14:06.652        "runtime": 2.060684,
00:14:06.652        "iops": 47083.39561038956,
00:14:06.652        "mibps": 183.91951410308423,
00:14:06.652        "io_failed": 0,
00:14:06.652        "io_timeout": 0,
00:14:06.652        "avg_latency_us": 5423.223621057922,
00:14:06.652        "min_latency_us": 1131.2761904761905,
00:14:06.652        "max_latency_us": 7489.828571428571
00:14:06.652      },
00:14:06.652      {
00:14:06.652        "job": "Malloc_STAT",
00:14:06.652        "core_mask": "0x2",
00:14:06.652        "workload": "randread",
00:14:06.652        "status": "finished",
00:14:06.652        "queue_depth": 256,
00:14:06.652        "io_size": 4096,
00:14:06.652        "runtime": 2.061885,
00:14:06.652        "iops": 47676.761798063424,
00:14:06.652        "mibps": 186.23735077368525,
00:14:06.652        "io_failed": 0,
00:14:06.652        "io_timeout": 0,
00:14:06.652        "avg_latency_us": 5356.145694444444,
00:14:06.652        "min_latency_us": 830.9028571428571,
00:14:06.652        "max_latency_us": 7177.752380952381
00:14:06.652      }
00:14:06.652    ],
00:14:06.652    "core_count": 2
00:14:06.652  }
00:14:06.652   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.652   13:50:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 75032
00:14:06.652   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' -z 75032 ']'
00:14:06.652   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # kill -0 75032
00:14:06.652    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # uname
00:14:06.652   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:06.652    13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75032
00:14:06.652   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:06.653   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:06.653  killing process with pid 75032
00:14:06.653   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75032'
00:14:06.653   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # kill 75032
00:14:06.653  Received shutdown signal, test time was about 2.244730 seconds
00:14:06.653  
00:14:06.653                                                                                                  Latency(us)
00:14:06.653  
[2024-12-11T13:50:49.425Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:06.653  
[2024-12-11T13:50:49.425Z]  ===================================================================================================================
00:14:06.653  
[2024-12-11T13:50:49.425Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:14:06.653   13:50:49 blockdev_general.bdev_stat -- common/autotest_common.sh@978 -- # wait 75032
00:14:08.556   13:50:50 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT
00:14:08.556  
00:14:08.556  real	0m5.225s
00:14:08.556  user	0m9.709s
00:14:08.556  sys	0m0.681s
00:14:08.556   13:50:50 blockdev_general.bdev_stat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:08.556   13:50:50 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:14:08.556  ************************************
00:14:08.556  END TEST bdev_stat
00:14:08.556  ************************************
00:14:08.556   13:50:50 blockdev_general -- bdev/blockdev.sh@829 -- # run_test bdev_dif_insert_strip dif_insert_strip_test_suite ''
00:14:08.556   13:50:50 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:08.556   13:50:50 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:08.556   13:50:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:14:08.556  ************************************
00:14:08.556  START TEST bdev_dif_insert_strip
00:14:08.556  ************************************
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@1129 -- # dif_insert_strip_test_suite ''
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@615 -- # DIF_DEV_1=Malloc_DIF_1
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@616 -- # DIF_DEV_2=Malloc_DIF_2
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@617 -- # DIF_DEV_3=Malloc_DIF_3
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@620 -- # DIF_PID=75128
00:14:08.556  Process bdev DIF insert/strip testing pid: 75128
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@621 -- # echo 'Process bdev DIF insert/strip testing pid: 75128'
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@622 -- # trap 'cleanup; killprocess $DIF_PID; exit 1' SIGINT SIGTERM EXIT
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@619 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0xf -q 32 -o 4096 -w randrw -M 50 -t 5 -C -N ''
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@623 -- # waitforlisten 75128
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@835 -- # '[' -z 75128 ']'
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:08.556  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:08.556   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:08.556  [2024-12-11 13:50:51.087949] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:08.556  [2024-12-11 13:50:51.088149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75128 ]
00:14:08.556  [2024-12-11 13:50:51.285534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:08.815  [2024-12-11 13:50:51.434975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:08.815  [2024-12-11 13:50:51.435151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:14:08.815  [2024-12-11 13:50:51.435295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:08.815  [2024-12-11 13:50:51.435321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
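
The four reactor lines match the -m 0xf mask passed to bdevperf at @619: 0xf is binary 1111, and SPDK starts one reactor per set bit.

    # 0xf = 0b1111 -> reactors on cores 0, 1, 2 and 3

Among the other flags in that invocation, -q 32 sets the queue depth, -o 4096 the I/O size in bytes, -w randrw with -M 50 a 50/50 random read/write mix, and -t 5 the run time in seconds; -z starts the app idle so the test can create the DIF bdevs first and only then trigger I/O via the perform_tests RPC further down.
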
00:14:09.380   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:09.380   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@868 -- # return 0
00:14:09.380   13:50:51 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_malloc_create -b Malloc_DIF_1 1 512 -m 8 -t 1 -f 0 -i
00:14:09.380   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.380   13:50:51 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.380  Malloc_DIF_1
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@626 -- # waitforbdev Malloc_DIF_1
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_DIF_1
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@905 -- # local i
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_DIF_1 -t 2000
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.380  [
00:14:09.380  {
00:14:09.380  "name": "Malloc_DIF_1",
00:14:09.380  "aliases": [
00:14:09.380  "2afe9a66-2c52-486c-b2db-d9c5138a9ef2"
00:14:09.380  ],
00:14:09.380  "product_name": "Malloc disk",
00:14:09.380  "block_size": 520,
00:14:09.380  "num_blocks": 2048,
00:14:09.380  "uuid": "2afe9a66-2c52-486c-b2db-d9c5138a9ef2",
00:14:09.380  "md_size": 8,
00:14:09.380  "md_interleave": true,
00:14:09.380  "dif_type": 1,
00:14:09.380  "dif_is_head_of_md": false,
00:14:09.380  "enabled_dif_check_types": {
00:14:09.380  "reftag": true,
00:14:09.380  "apptag": false,
00:14:09.380  "guard": true
00:14:09.380  },
00:14:09.380  "dif_pi_format": 0,
00:14:09.380  "assigned_rate_limits": {
00:14:09.380  "rw_ios_per_sec": 0,
00:14:09.380  "rw_mbytes_per_sec": 0,
00:14:09.380  "r_mbytes_per_sec": 0,
00:14:09.380  "w_mbytes_per_sec": 0
00:14:09.380  },
00:14:09.380  "claimed": false,
00:14:09.380  "zoned": false,
00:14:09.380  "supported_io_types": {
00:14:09.380  "read": true,
00:14:09.380  "write": true,
00:14:09.380  "unmap": true,
00:14:09.380  "flush": true,
00:14:09.380  "reset": true,
00:14:09.380  "nvme_admin": false,
00:14:09.380  "nvme_io": false,
00:14:09.380  "nvme_io_md": false,
00:14:09.380  "write_zeroes": true,
00:14:09.380  "zcopy": true,
00:14:09.380  "get_zone_info": false,
00:14:09.380  "zone_management": false,
00:14:09.380  "zone_append": false,
00:14:09.380  "compare": false,
00:14:09.380  "compare_and_write": false,
00:14:09.380  "abort": true,
00:14:09.380  "seek_hole": false,
00:14:09.380  "seek_data": false,
00:14:09.380  "copy": true,
00:14:09.380  "nvme_iov_md": false
00:14:09.380  },
00:14:09.380  "driver_specific": {}
00:14:09.380  }
00:14:09.380  ]
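
The geometry in the dump follows directly from the creation call at @625: a 1 MiB malloc bdev with 512-byte data blocks gives num_blocks 2048, and -m 8 with -i interleaves 8 bytes of metadata into each block, so block_size is reported as 520 (512 + 8); -t 1 selects DIF type 1. A minimal by-hand equivalent, assuming the in-tree scripts/rpc.py helper:

    # 1 MiB malloc bdev, 512 B blocks, 8 B interleaved metadata, DIF type 1
    ./scripts/rpc.py bdev_malloc_create -b Malloc_DIF_1 1 512 -m 8 -t 1 -f 0 -i
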
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@911 -- # return 0
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@627 -- # rpc_cmd bdev_malloc_create -b Malloc_DIF_2 1 512 -m 16 -t 1 -f 0 -i
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.380  Malloc_DIF_2
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@628 -- # waitforbdev Malloc_DIF_2
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_DIF_2
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@905 -- # local i
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:09.380   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_DIF_2 -t 2000
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.381  [
00:14:09.381  {
00:14:09.381  "name": "Malloc_DIF_2",
00:14:09.381  "aliases": [
00:14:09.381  "797bbaf5-15b8-48e8-97c7-d5a31c9fc3fc"
00:14:09.381  ],
00:14:09.381  "product_name": "Malloc disk",
00:14:09.381  "block_size": 528,
00:14:09.381  "num_blocks": 2048,
00:14:09.381  "uuid": "797bbaf5-15b8-48e8-97c7-d5a31c9fc3fc",
00:14:09.381  "md_size": 16,
00:14:09.381  "md_interleave": true,
00:14:09.381  "dif_type": 1,
00:14:09.381  "dif_is_head_of_md": false,
00:14:09.381  "enabled_dif_check_types": {
00:14:09.381  "reftag": true,
00:14:09.381  "apptag": false,
00:14:09.381  "guard": true
00:14:09.381  },
00:14:09.381  "dif_pi_format": 0,
00:14:09.381  "assigned_rate_limits": {
00:14:09.381  "rw_ios_per_sec": 0,
00:14:09.381  "rw_mbytes_per_sec": 0,
00:14:09.381  "r_mbytes_per_sec": 0,
00:14:09.381  "w_mbytes_per_sec": 0
00:14:09.381  },
00:14:09.381  "claimed": false,
00:14:09.381  "zoned": false,
00:14:09.381  "supported_io_types": {
00:14:09.381  "read": true,
00:14:09.381  "write": true,
00:14:09.381  "unmap": true,
00:14:09.381  "flush": true,
00:14:09.381  "reset": true,
00:14:09.381  "nvme_admin": false,
00:14:09.381  "nvme_io": false,
00:14:09.381  "nvme_io_md": false,
00:14:09.381  "write_zeroes": true,
00:14:09.381  "zcopy": true,
00:14:09.381  "get_zone_info": false,
00:14:09.381  "zone_management": false,
00:14:09.381  "zone_append": false,
00:14:09.381  "compare": false,
00:14:09.381  "compare_and_write": false,
00:14:09.381  "abort": true,
00:14:09.381  "seek_hole": false,
00:14:09.381  "seek_data": false,
00:14:09.381  "copy": true,
00:14:09.381  "nvme_iov_md": false
00:14:09.381  },
00:14:09.381  "driver_specific": {}
00:14:09.381  }
00:14:09.381  ]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@911 -- # return 0
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@629 -- # rpc_cmd bdev_malloc_create -b Malloc_DIF_3 1 512 -m 16 -t 1 -f 0 -i -d
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.381  Malloc_DIF_3
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@630 -- # waitforbdev Malloc_DIF_3
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_DIF_3
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@905 -- # local i
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_DIF_3 -t 2000
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:09.381  [
00:14:09.381  {
00:14:09.381  "name": "Malloc_DIF_3",
00:14:09.381  "aliases": [
00:14:09.381  "747e30d4-7b87-42ee-be84-3934a3b9338a"
00:14:09.381  ],
00:14:09.381  "product_name": "Malloc disk",
00:14:09.381  "block_size": 528,
00:14:09.381  "num_blocks": 2048,
00:14:09.381  "uuid": "747e30d4-7b87-42ee-be84-3934a3b9338a",
00:14:09.381  "md_size": 16,
00:14:09.381  "md_interleave": true,
00:14:09.381  "dif_type": 1,
00:14:09.381  "dif_is_head_of_md": true,
00:14:09.381  "enabled_dif_check_types": {
00:14:09.381  "reftag": true,
00:14:09.381  "apptag": false,
00:14:09.381  "guard": true
00:14:09.381  },
00:14:09.381  "dif_pi_format": 0,
00:14:09.381  "assigned_rate_limits": {
00:14:09.381  "rw_ios_per_sec": 0,
00:14:09.381  "rw_mbytes_per_sec": 0,
00:14:09.381  "r_mbytes_per_sec": 0,
00:14:09.381  "w_mbytes_per_sec": 0
00:14:09.381  },
00:14:09.381  "claimed": false,
00:14:09.381  "zoned": false,
00:14:09.381  "supported_io_types": {
00:14:09.381  "read": true,
00:14:09.381  "write": true,
00:14:09.381  "unmap": true,
00:14:09.381  "flush": true,
00:14:09.381  "reset": true,
00:14:09.381  "nvme_admin": false,
00:14:09.381  "nvme_io": false,
00:14:09.381  "nvme_io_md": false,
00:14:09.381  "write_zeroes": true,
00:14:09.381  "zcopy": true,
00:14:09.381  "get_zone_info": false,
00:14:09.381  "zone_management": false,
00:14:09.381  "zone_append": false,
00:14:09.381  "compare": false,
00:14:09.381  "compare_and_write": false,
00:14:09.381  "abort": true,
00:14:09.381  "seek_hole": false,
00:14:09.381  "seek_data": false,
00:14:09.381  "copy": true,
00:14:09.381  "nvme_iov_md": false
00:14:09.381  },
00:14:09.381  "driver_specific": {}
00:14:09.381  }
00:14:09.381  ]
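
Malloc_DIF_3 has the same 512 + 16 geometry as Malloc_DIF_2 (block_size 528), but the extra -d flag at @629 shows up in the dump as dif_is_head_of_md: true, i.e. the protection information sits at the start of the 16-byte metadata region rather than at its end.

    # Same shape as Malloc_DIF_2, plus -d to put the PI at the head of the metadata
    ./scripts/rpc.py bdev_malloc_create -b Malloc_DIF_3 1 512 -m 16 -t 1 -f 0 -i -d
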
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@911 -- # return 0
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@633 -- # sleep 10
00:14:09.381   13:50:52 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@632 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:09.640  Running I/O for 5 seconds...
00:14:11.993      53472.00 IOPS,   208.88 MiB/s
[2024-12-11T13:50:55.722Z]     52656.00 IOPS,   205.69 MiB/s
[2024-12-11T13:50:56.657Z]     49824.00 IOPS,   194.62 MiB/s
[2024-12-11T13:50:57.594Z]     47232.00 IOPS,   184.50 MiB/s
[2024-12-11T13:50:57.594Z]     45209.60 IOPS,   176.60 MiB/s
00:14:14.822                                                                                                  Latency(us)
00:14:14.822  
[2024-12-11T13:50:57.594Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:14.822  Job: Malloc_DIF_1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_1        :       5.02    3277.08      12.80       0.00     0.00    9744.57    1630.60   19972.88
00:14:14.822  Job: Malloc_DIF_1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_1        :       5.02    3864.30      15.09       0.00     0.00    8263.63    1630.60   10735.42
00:14:14.822  Job: Malloc_DIF_1 (Core Mask 0x4, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_1        :       5.01    3953.38      15.44       0.00     0.00    8077.21    1763.23   10922.67
00:14:14.822  Job: Malloc_DIF_1 (Core Mask 0x8, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_1        :       5.01    3924.81      15.33       0.00     0.00    8136.33    1739.82   11421.99
00:14:14.822  Job: Malloc_DIF_2 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_2        :       5.03    3271.31      12.78       0.00     0.00    9755.04    1529.17   21595.67
00:14:14.822  Job: Malloc_DIF_2 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_2        :       5.03    3857.71      15.07       0.00     0.00    8271.91    1599.39   14917.24
00:14:14.822  Job: Malloc_DIF_2 (Core Mask 0x4, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_2        :       5.02    3953.89      15.44       0.00     0.00    8069.75    1497.97   11172.33
00:14:14.822  Job: Malloc_DIF_2 (Core Mask 0x8, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_2        :       5.02    3923.60      15.33       0.00     0.00    8132.74    1630.60   10922.67
00:14:14.822  Job: Malloc_DIF_3 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_3        :       5.04    3266.60      12.76       0.00     0.00    9762.13    1622.80   19473.55
00:14:14.822  Job: Malloc_DIF_3 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_3        :       5.03    3851.50      15.04       0.00     0.00    8278.86    1654.00   20097.71
00:14:14.822  Job: Malloc_DIF_3 (Core Mask 0x4, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_3        :       5.03    3947.63      15.42       0.00     0.00    8076.29    1638.40   11297.16
00:14:14.822  Job: Malloc_DIF_3 (Core Mask 0x8, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:14:14.822  	 Malloc_DIF_3        :       5.02    3922.38      15.32       0.00     0.00    8129.27    1654.00   11172.33
00:14:14.822  
[2024-12-11T13:50:57.594Z]  ===================================================================================================================
00:14:14.822  
[2024-12-11T13:50:57.594Z]  Total                       :              45014.18     175.84       0.00     0.00    8506.99    1497.97   21595.67
00:14:14.822  {
00:14:14.822    "results": [
00:14:14.822      {
00:14:14.822        "job": "Malloc_DIF_1",
00:14:14.822        "core_mask": "0x1",
00:14:14.822        "workload": "randrw",
00:14:14.822        "percentage": 50,
00:14:14.822        "status": "finished",
00:14:14.822        "queue_depth": 32,
00:14:14.822        "io_size": 4096,
00:14:14.822        "runtime": 5.019108,
00:14:14.822        "iops": 3277.076325115937,
00:14:14.822        "mibps": 12.801079394984129,
00:14:14.822        "io_failed": 0,
00:14:14.822        "io_timeout": 0,
00:14:14.822        "avg_latency_us": 9744.570272373541,
00:14:14.822        "min_latency_us": 1630.5980952380953,
00:14:14.822        "max_latency_us": 19972.876190476192
00:14:14.822      },
00:14:14.822      {
00:14:14.822        "job": "Malloc_DIF_1",
00:14:14.822        "core_mask": "0x2",
00:14:14.822        "workload": "randrw",
00:14:14.822        "percentage": 50,
00:14:14.822        "status": "finished",
00:14:14.822        "queue_depth": 32,
00:14:14.822        "io_size": 4096,
00:14:14.822        "runtime": 5.018242,
00:14:14.822        "iops": 3864.3014824713514,
00:14:14.822        "mibps": 15.094927665903716,
00:14:14.822        "io_failed": 0,
00:14:14.822        "io_timeout": 0,
00:14:14.822        "avg_latency_us": 8263.630312745561,
00:14:14.822        "min_latency_us": 1630.5980952380953,
00:14:14.822        "max_latency_us": 10735.420952380953
00:14:14.822      },
00:14:14.822      {
00:14:14.822        "job": "Malloc_DIF_1",
00:14:14.822        "core_mask": "0x4",
00:14:14.822        "workload": "randrw",
00:14:14.822        "percentage": 50,
00:14:14.822        "status": "finished",
00:14:14.822        "queue_depth": 32,
00:14:14.822        "io_size": 4096,
00:14:14.822        "runtime": 5.010391,
00:14:14.822        "iops": 3953.3840772107405,
00:14:14.822        "mibps": 15.442906551604455,
00:14:14.822        "io_failed": 0,
00:14:14.822        "io_timeout": 0,
00:14:14.822        "avg_latency_us": 8077.20746518963,
00:14:14.822        "min_latency_us": 1763.230476190476,
00:14:14.822        "max_latency_us": 10922.666666666666
00:14:14.822      },
00:14:14.822      {
00:14:14.822        "job": "Malloc_DIF_1",
00:14:14.822        "core_mask": "0x8",
00:14:14.822        "workload": "randrw",
00:14:14.822        "percentage": 50,
00:14:14.822        "status": "finished",
00:14:14.822        "queue_depth": 32,
00:14:14.822        "io_size": 4096,
00:14:14.822        "runtime": 5.014261,
00:14:14.822        "iops": 3924.805669270108,
00:14:14.822        "mibps": 15.33127214558636,
00:14:14.822        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8136.33491598916,
00:14:14.823        "min_latency_us": 1739.824761904762,
00:14:14.823        "max_latency_us": 11421.988571428572
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_2",
00:14:14.823        "core_mask": "0x1",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.027962,
00:14:14.823        "iops": 3271.3055508374964,
00:14:14.823        "mibps": 12.77853730795897,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 9755.04080044469,
00:14:14.823        "min_latency_us": 1529.1733333333334,
00:14:14.823        "max_latency_us": 21595.67238095238
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_2",
00:14:14.823        "core_mask": "0x2",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.02682,
00:14:14.823        "iops": 3857.707258266658,
00:14:14.823        "mibps": 15.069168977604132,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8271.914807480747,
00:14:14.823        "min_latency_us": 1599.3904761904762,
00:14:14.823        "max_latency_us": 14917.241904761904
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_2",
00:14:14.823        "core_mask": "0x4",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.017841,
00:14:14.823        "iops": 3953.8917235520216,
00:14:14.823        "mibps": 15.444889545125084,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8069.748792626728,
00:14:14.823        "min_latency_us": 1497.9657142857143,
00:14:14.823        "max_latency_us": 11172.327619047619
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_2",
00:14:14.823        "core_mask": "0x8",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.015802,
00:14:14.823        "iops": 3923.5998550182003,
00:14:14.823        "mibps": 15.326561933664845,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8132.739617499032,
00:14:14.823        "min_latency_us": 1630.5980952380953,
00:14:14.823        "max_latency_us": 10922.666666666666
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_3",
00:14:14.823        "core_mask": "0x1",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.035206,
00:14:14.823        "iops": 3266.5992215611436,
00:14:14.823        "mibps": 12.760153209223217,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 9762.125743931814,
00:14:14.823        "min_latency_us": 1622.7961904761905,
00:14:14.823        "max_latency_us": 19473.554285714286
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_3",
00:14:14.823        "core_mask": "0x2",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.034922,
00:14:14.823        "iops": 3851.4995862895194,
00:14:14.823        "mibps": 15.044920258943435,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8278.85694169417,
00:14:14.823        "min_latency_us": 1654.0038095238094,
00:14:14.823        "max_latency_us": 20097.706666666665
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_3",
00:14:14.823        "core_mask": "0x4",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.025804,
00:14:14.823        "iops": 3947.6270861338803,
00:14:14.823        "mibps": 15.42041830521047,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8076.292915514593,
00:14:14.823        "min_latency_us": 1638.4,
00:14:14.823        "max_latency_us": 11297.158095238095
00:14:14.823      },
00:14:14.823      {
00:14:14.823        "job": "Malloc_DIF_3",
00:14:14.823        "core_mask": "0x8",
00:14:14.823        "workload": "randrw",
00:14:14.823        "percentage": 50,
00:14:14.823        "status": "finished",
00:14:14.823        "queue_depth": 32,
00:14:14.823        "io_size": 4096,
00:14:14.823        "runtime": 5.017361,
00:14:14.823        "iops": 3922.3807096997803,
00:14:14.823        "mibps": 15.321799647264767,
00:14:14.823        "io_failed": 0,
00:14:14.823        "io_timeout": 0,
00:14:14.823        "avg_latency_us": 8129.271377468061,
00:14:14.823        "min_latency_us": 1654.0038095238094,
00:14:14.823        "max_latency_us": 11172.327619047619
00:14:14.823      }
00:14:14.823    ],
00:14:14.823    "core_count": 4
00:14:14.823  }
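
The structured results mirror the table above (3 DIF bdevs x 4 cores = 12 jobs) and are easy to post-process; a small sketch, assuming the JSON blob has been saved to results.json:

    # Total IOPS across all 12 jobs -- matches the Total row (~45014)
    jq '[.results[].iops] | add' results.json
    # Worst latency observed by any job, in microseconds
    jq '[.results[].max_latency_us] | max' results.json
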
00:14:20.094  Process exists. Pid: 75128
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@636 -- # kill -0 75128
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@637 -- # echo 'Process exists. Pid: 75128'
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@643 -- # rpc_cmd bdev_malloc_delete Malloc_DIF_1
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@644 -- # rpc_cmd bdev_malloc_delete Malloc_DIF_2
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@645 -- # rpc_cmd bdev_malloc_delete Malloc_DIF_3
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@646 -- # killprocess 75128
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@954 -- # '[' -z 75128 ']'
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@958 -- # kill -0 75128
00:14:20.094    13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@959 -- # uname
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:20.094    13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75128
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:20.094  killing process with pid 75128
00:14:20.094  Received shutdown signal, test time was about 5.000000 seconds
00:14:20.094  
00:14:20.094                                                                                                  Latency(us)
00:14:20.094  
[2024-12-11T13:51:02.866Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:20.094  
[2024-12-11T13:51:02.866Z]  ===================================================================================================================
00:14:20.094  
[2024-12-11T13:51:02.866Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75128'
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@973 -- # kill 75128
00:14:20.094   13:51:02 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@978 -- # wait 75128
00:14:21.032   13:51:03 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@647 -- # trap - SIGINT SIGTERM EXIT
00:14:21.032  
00:14:21.032  real	0m12.522s
00:14:21.032  user	0m47.861s
00:14:21.032  sys	0m0.573s
00:14:21.032   13:51:03 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:21.032   13:51:03 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:14:21.032  ************************************
00:14:21.032  END TEST bdev_dif_insert_strip
00:14:21.032  ************************************
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@832 -- # [[ bdev == gpt ]]
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@836 -- # [[ bdev == crypto_sw ]]
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@849 -- # cleanup
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]]
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]]
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]]
00:14:21.032   13:51:03 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]]
00:14:21.032  ************************************
00:14:21.032  END TEST blockdev_general
00:14:21.032  ************************************
00:14:21.032  
00:14:21.032  real	2m47.765s
00:14:21.032  user	6m53.997s
00:14:21.032  sys	0m28.472s
00:14:21.032   13:51:03 blockdev_general -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:21.032   13:51:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:14:21.032   13:51:03  -- spdk/autotest.sh@181 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:14:21.032   13:51:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:21.032   13:51:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:21.032   13:51:03  -- common/autotest_common.sh@10 -- # set +x
00:14:21.032  ************************************
00:14:21.032  START TEST bdevperf_config
00:14:21.032  ************************************
00:14:21.032   13:51:03 bdevperf_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:14:21.032  * Looking for test storage...
00:14:21.032  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:14:21.032    13:51:03 bdevperf_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:21.032     13:51:03 bdevperf_config -- common/autotest_common.sh@1711 -- # lcov --version
00:14:21.032     13:51:03 bdevperf_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:21.291    13:51:03 bdevperf_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@336 -- # IFS=.-:
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@336 -- # read -ra ver1
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@337 -- # IFS=.-:
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@337 -- # read -ra ver2
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@338 -- # local 'op=<'
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@340 -- # ver1_l=2
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@341 -- # ver2_l=1
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@344 -- # case "$op" in
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@345 -- # : 1
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@365 -- # decimal 1
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@353 -- # local d=1
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@355 -- # echo 1
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@365 -- # ver1[v]=1
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@366 -- # decimal 2
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@353 -- # local d=2
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:21.292     13:51:03 bdevperf_config -- scripts/common.sh@355 -- # echo 2
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@366 -- # ver2[v]=2
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:21.292    13:51:03 bdevperf_config -- scripts/common.sh@368 -- # return 0
00:14:21.292    13:51:03 bdevperf_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:21.292    13:51:03 bdevperf_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:21.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:21.292  		--rc genhtml_branch_coverage=1
00:14:21.292  		--rc genhtml_function_coverage=1
00:14:21.292  		--rc genhtml_legend=1
00:14:21.292  		--rc geninfo_all_blocks=1
00:14:21.292  		--rc geninfo_unexecuted_blocks=1
00:14:21.292  		
00:14:21.292  		'
00:14:21.292    13:51:03 bdevperf_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:21.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:21.292  		--rc genhtml_branch_coverage=1
00:14:21.292  		--rc genhtml_function_coverage=1
00:14:21.292  		--rc genhtml_legend=1
00:14:21.292  		--rc geninfo_all_blocks=1
00:14:21.292  		--rc geninfo_unexecuted_blocks=1
00:14:21.292  		
00:14:21.292  		'
00:14:21.292    13:51:03 bdevperf_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:21.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:21.292  		--rc genhtml_branch_coverage=1
00:14:21.292  		--rc genhtml_function_coverage=1
00:14:21.292  		--rc genhtml_legend=1
00:14:21.292  		--rc geninfo_all_blocks=1
00:14:21.292  		--rc geninfo_unexecuted_blocks=1
00:14:21.292  		
00:14:21.292  		'
00:14:21.292    13:51:03 bdevperf_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:21.292  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:21.292  		--rc genhtml_branch_coverage=1
00:14:21.292  		--rc genhtml_function_coverage=1
00:14:21.292  		--rc genhtml_legend=1
00:14:21.292  		--rc geninfo_all_blocks=1
00:14:21.292  		--rc geninfo_unexecuted_blocks=1
00:14:21.292  		
00:14:21.292  		'
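
Everything from common.sh@1710 down to the exported LCOV options is scripts/common.sh deciding how to set lcov's coverage flags: it compares 1.15 against the detected lcov version (2 here), splitting both strings on .-: and comparing them field by field. A simplified standalone sketch of that traced logic (the in-tree helper also normalizes non-numeric fields through its decimal wrapper, elided here):

    version_lt() {  # succeeds when dotted version $1 is older than $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # newer: not less-than
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # older: less-than
        done
        return 1                                        # equal: not less-than
    }
    version_lt 1.15 2 && echo 'installed lcov is newer than 1.15'
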
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:14:21.292    13:51:03 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:14:21.292  
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:21.292  
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:21.292  
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:21.292  
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:21.292  
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:21.292   13:51:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat
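
Each create_job call above appends one INI-style section to test.conf: @13 cats a template for the [global] section (carrying the rw=read and filename=Malloc0 locals from the trace) and @20 cats the per-job template for [job0] through [job3], which inherit the global settings. The cat'd template bodies are elided by the trace; under the assumption that they emit the rw and filename keys seen as locals, the generated file would look roughly like:

    [global]
    filename=Malloc0
    rw=read

    [job0]
    [job1]
    [job2]
    [job3]
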
00:14:21.292    13:51:03 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:26.564   13:51:08 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-12-11 13:51:03.950136] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:26.564  [2024-12-11 13:51:03.950299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75349 ]
00:14:26.564  Using job config with 4 jobs
00:14:26.564  [2024-12-11 13:51:04.129422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:26.564  [2024-12-11 13:51:04.289368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:26.564  cpumask for '\''job0'\'' is too big
00:14:26.564  cpumask for '\''job1'\'' is too big
00:14:26.564  cpumask for '\''job2'\'' is too big
00:14:26.564  cpumask for '\''job3'\'' is too big
00:14:26.564  Running I/O for 2 seconds...
00:14:26.564     115712.00 IOPS,   113.00 MiB/s
[2024-12-11T13:51:09.336Z]    117760.00 IOPS,   115.00 MiB/s
00:14:26.564                                                                                                  Latency(us)
00:14:26.564  
[2024-12-11T13:51:09.336Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:26.564  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:26.564  	 Malloc0             :       2.02   29437.09      28.75       0.00     0.00    8687.50    1817.84   14542.75
00:14:26.564  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:26.564  	 Malloc0             :       2.02   29417.45      28.73       0.00     0.00    8675.48    1700.82   12857.54
00:14:26.564  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:26.564  	 Malloc0             :       2.02   29397.98      28.71       0.00     0.00    8663.83    1778.83   12607.88
00:14:26.564  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:26.564  	 Malloc0             :       2.02   29378.45      28.69       0.00     0.00    8652.71    1700.82   13981.01
00:14:26.564  
[2024-12-11T13:51:09.336Z]  ===================================================================================================================
00:14:26.564  
[2024-12-11T13:51:09.336Z]  Total                       :             117630.96     114.87       0.00     0.00    8669.88    1700.82   14542.75'
00:14:26.564    13:51:08 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '...'  (argument repeats the bdevperf output captured above; duplicate block elided)
00:14:26.564    13:51:08 bdevperf_config -- bdevperf/common.sh@32 -- # echo '...'  (same captured output echoed into the grep pipeline below; duplicate block elided)
00:14:26.564    13:51:08 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:14:26.564    13:51:08 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:14:26.564   13:51:08 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
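
The get_num_jobs helper referenced at test_config.sh@23 is never expanded by the trace, but the common.sh@32 lines above show its moving parts: it echoes the captured bdevperf output and extracts the job count with two greps. A minimal sketch reconstructed from that trace, not the verbatim SPDK source:

    # Reconstruction of common.sh's get_num_jobs from the @32 trace lines.
    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }

    # Mirrors the check at test_config.sh@23:
    [[ $(get_num_jobs "$bdevperf_output") == "4" ]]
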
00:14:26.564    13:51:08 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:26.564  [2024-12-11 13:51:08.700091] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:26.564  [2024-12-11 13:51:08.700234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75405 ]
00:14:26.564  [2024-12-11 13:51:08.876686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:26.564  [2024-12-11 13:51:09.036020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:27.133  cpumask for 'job0' is too big
00:14:27.133  cpumask for 'job1' is too big
00:14:27.133  cpumask for 'job2' is too big
00:14:27.133  cpumask for 'job3' is too big
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:14:31.329  Running I/O for 2 seconds...
00:14:31.329     119808.00 IOPS,   117.00 MiB/s
[2024-12-11T13:51:14.101Z]    119808.00 IOPS,   117.00 MiB/s
00:14:31.329                                                                                                  Latency(us)
00:14:31.329  
[2024-12-11T13:51:14.101Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:31.329  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:31.329  	 Malloc0             :       2.02   29884.83      29.18       0.00     0.00    8558.61    1724.22   14105.84
00:14:31.329  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:31.329  	 Malloc0             :       2.02   29863.97      29.16       0.00     0.00    8546.66    1685.21   12295.80
00:14:31.329  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:31.329  	 Malloc0             :       2.02   29844.11      29.14       0.00     0.00    8536.53    1677.41   12857.54
00:14:31.329  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:14:31.329  	 Malloc0             :       2.03   29822.19      29.12       0.00     0.00    8526.05    1685.21   14605.17
00:14:31.329  
[2024-12-11T13:51:14.101Z]  ===================================================================================================================
00:14:31.329  
[2024-12-11T13:51:14.101Z]  Total                       :             119415.09     116.62       0.00     0.00    8541.96    1677.41   14605.17'
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:14:31.329  
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:14:31.329  
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:31.329  
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:31.329   13:51:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
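
The common.sh@8-@20 lines above outline create_job: it takes a section name plus optional rw and filename values, emits a '[section]' header, and the result ends up appended to test.conf. A hedged sketch of that shape; the exact option lines and the global-only cat at @13 are assumptions, since the heredoc contents never appear in the trace:

    create_job() {
        local job_section=$1 rw=$2 filename=$3
        printf '[%s]\n' "$job_section"                       # e.g. [job0]
        [[ -n $filename ]] && printf 'filename=%s\n' "$filename"
        [[ -n $rw ]] && printf 'rw=%s\n' "$rw"
        printf '\n'
    }
    create_job job0 write Malloc0 >> test.conf               # assumed redirection
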
00:14:31.329    13:51:13 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:35.531   13:51:18 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-12-11 13:51:13.468339] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:35.531  [2024-12-11 13:51:13.468517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75464 ]
00:14:35.531  Using job config with 3 jobs
00:14:35.531  [2024-12-11 13:51:13.663673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:35.532  [2024-12-11 13:51:13.820246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:35.532  cpumask for 'job0' is too big
00:14:35.532  cpumask for 'job1' is too big
00:14:35.532  cpumask for 'job2' is too big
00:14:35.532  Running I/O for 2 seconds...
00:14:35.532     119808.00 IOPS,   117.00 MiB/s
[2024-12-11T13:51:18.304Z]    121728.00 IOPS,   118.88 MiB/s
00:14:35.532                                                                                                  Latency(us)
00:14:35.532  
[2024-12-11T13:51:18.304Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:35.532  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:14:35.532  	 Malloc0             :       2.02   40485.85      39.54       0.00     0.00    6316.08    1732.02    9799.19
00:14:35.532  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:14:35.532  	 Malloc0             :       2.02   40458.67      39.51       0.00     0.00    6308.07    1716.42    9299.87
00:14:35.532  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:14:35.532  	 Malloc0             :       2.02   40431.59      39.48       0.00     0.00    6300.64    1614.99   10797.84
00:14:35.532  
[2024-12-11T13:51:18.304Z]  ===================================================================================================================
00:14:35.532  
[2024-12-11T13:51:18.304Z]  Total                       :             121376.10     118.53       0.00     0.00    6308.26    1614.99   10797.84'
00:14:35.532    13:51:18 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '...'  (argument repeats the bdevperf output captured above; duplicate block elided)
00:14:35.532    13:51:18 bdevperf_config -- bdevperf/common.sh@32 -- # echo '...'  (same captured output echoed into the grep pipeline below; duplicate block elided)
00:14:35.532    13:51:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:14:35.532    13:51:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:14:35.532  
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:35.532  
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:35.532  
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:35.532  
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:14:35.532  
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:14:35.532   13:51:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat
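
Steps @37-@41 above build one [global] section (rw over Malloc0:Malloc1) followed by four empty job sections. The generated test.conf plausibly looks like the following; any extra global options pulled in by the @13 cat are not visible in the trace, so the exact layout is an assumption:

    [global]
    filename=Malloc0:Malloc1
    rw=rw

    [job0]

    [job1]

    [job2]

    [job3]
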
00:14:35.532    13:51:18 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:40.816   13:51:22 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-12-11 13:51:18.287313] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:40.816  [2024-12-11 13:51:18.287459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75529 ]
00:14:40.816  Using job config with 4 jobs
00:14:40.816  [2024-12-11 13:51:18.462587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:40.816  [2024-12-11 13:51:18.615033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:40.816  cpumask for 'job0' is too big
00:14:40.816  cpumask for 'job1' is too big
00:14:40.816  cpumask for 'job2' is too big
00:14:40.816  cpumask for 'job3' is too big
00:14:40.816  Running I/O for 2 seconds...
00:14:40.816     118784.00 IOPS,   116.00 MiB/s
[2024-12-11T13:51:23.588Z]    117248.00 IOPS,   114.50 MiB/s
00:14:40.816                                                                                                  Latency(us)
00:14:40.816  
[2024-12-11T13:51:23.588Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:40.816  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc0             :       2.02   14419.30      14.08       0.00     0.00   17741.81    3588.88   28711.01
00:14:40.816  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc1             :       2.03   14408.76      14.07       0.00     0.00   17738.01    4306.65   28461.35
00:14:40.816  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc0             :       2.03   14399.26      14.06       0.00     0.00   17690.15    3651.29   24841.26
00:14:40.816  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc1             :       2.04   14416.76      14.08       0.00     0.00   17652.65    4088.20   24841.26
00:14:40.816  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc0             :       2.04   14406.79      14.07       0.00     0.00   17610.28    3448.44   21346.01
00:14:40.816  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc1             :       2.04   14396.30      14.06       0.00     0.00   17604.08    3978.97   21346.01
00:14:40.816  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc0             :       2.05   14386.69      14.05       0.00     0.00   17567.19    3510.86   21346.01
00:14:40.816  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:14:40.816  	 Malloc1             :       2.05   14376.29      14.04       0.00     0.00   17567.37    3994.58   21346.01
00:14:40.816  
[2024-12-11T13:51:23.588Z]  ===================================================================================================================
00:14:40.816  
[2024-12-11T13:51:23.588Z]  Total                       :             115210.15     112.51       0.00     0.00   17646.19    3448.44   28711.01'
00:14:40.816    13:51:22 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '...'  (argument repeats the bdevperf output captured above; duplicate block elided)
00:14:40.816    13:51:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:14:40.816    13:51:22 bdevperf_config -- bdevperf/common.sh@32 -- # echo '...'  (same captured output echoed into the grep pipeline; duplicate block elided)
00:14:40.816    13:51:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:14:40.816   13:51:22 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:14:40.816   13:51:22 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup
00:14:40.816   13:51:22 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:14:40.816   13:51:22 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
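
cleanup (common.sh@36) just removes the generated test.conf; the 'trap -' at @45 clears a handler the test presumably registered at startup. The usual shape of that pattern, with the registration assumed since it happened before this excerpt:

    cleanup() {
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
    }
    trap 'cleanup' SIGINT SIGTERM EXIT    # assumed registration (before this excerpt)
    # ... test steps ...
    cleanup
    trap - SIGINT SIGTERM EXIT            # clears the handler, as at @45
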
00:14:40.816  ************************************
00:14:40.816  END TEST bdevperf_config
00:14:40.817  ************************************
00:14:40.817  
00:14:40.817  real	0m19.343s
00:14:40.817  user	0m16.983s
00:14:40.817  sys	0m1.891s
00:14:40.817   13:51:22 bdevperf_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:40.817   13:51:22 bdevperf_config -- common/autotest_common.sh@10 -- # set +x
00:14:40.817    13:51:23  -- spdk/autotest.sh@182 -- # uname -s
00:14:40.817   13:51:23  -- spdk/autotest.sh@182 -- # [[ Linux == Linux ]]
00:14:40.817   13:51:23  -- spdk/autotest.sh@183 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:14:40.817   13:51:23  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:40.817   13:51:23  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:40.817   13:51:23  -- common/autotest_common.sh@10 -- # set +x
00:14:40.817  ************************************
00:14:40.817  START TEST reactor_set_interrupt
00:14:40.817  ************************************
00:14:40.817   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
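
run_test, invoked at autotest.sh@183 above, is only shown here via its argument check and xtrace toggling, but the START TEST/END TEST banners and the real/user/sys timing it produces suggest a wrapper roughly along these lines (a sketch, not the verbatim autotest_common.sh helper):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # the real/user/sys lines come from this
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
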
00:14:40.817  * Looking for test storage...
00:14:40.817  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1711 -- # lcov --version
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:40.817     13:51:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:40.817    13:51:23 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0
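
The scripts/common.sh trace above (@333-@368) is a component-wise version comparison: both version strings are split on '.', '-' and ':', each component is validated as a decimal, and components are compared numerically left to right; 'lt 1.15 2' returns 0 because 1 < 2 in the first component. A condensed sketch of the same idea (assumes numeric components; the real decimal() helper at @353-@355 validates them):

    version_lt() {                       # returns 0 when $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                         # equal
    }
    version_lt 1.15 2                    # true, so the lcov branch above is taken
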
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:40.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.817  		--rc genhtml_branch_coverage=1
00:14:40.817  		--rc genhtml_function_coverage=1
00:14:40.817  		--rc genhtml_legend=1
00:14:40.817  		--rc geninfo_all_blocks=1
00:14:40.817  		--rc geninfo_unexecuted_blocks=1
00:14:40.817  		
00:14:40.817  		'
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:40.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.817  		--rc genhtml_branch_coverage=1
00:14:40.817  		--rc genhtml_function_coverage=1
00:14:40.817  		--rc genhtml_legend=1
00:14:40.817  		--rc geninfo_all_blocks=1
00:14:40.817  		--rc geninfo_unexecuted_blocks=1
00:14:40.817  		
00:14:40.817  		'
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:40.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.817  		--rc genhtml_branch_coverage=1
00:14:40.817  		--rc genhtml_function_coverage=1
00:14:40.817  		--rc genhtml_legend=1
00:14:40.817  		--rc geninfo_all_blocks=1
00:14:40.817  		--rc geninfo_unexecuted_blocks=1
00:14:40.817  		
00:14:40.817  		'
00:14:40.817    13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:40.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.817  		--rc genhtml_branch_coverage=1
00:14:40.817  		--rc genhtml_function_coverage=1
00:14:40.817  		--rc genhtml_legend=1
00:14:40.817  		--rc geninfo_all_blocks=1
00:14:40.817  		--rc geninfo_unexecuted_blocks=1
00:14:40.817  		
00:14:40.817  		'
00:14:40.817   13:51:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:14:40.817      13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:14:40.817     13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.817    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.817     13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:14:40.817    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:40.817    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:14:40.817     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_CET=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:14:40.817      13:51:23 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_FC=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:14:40.818      13:51:23 reactor_set_interrupt -- common/build_config.sh@90 -- # CONFIG_URING=n
00:14:40.818     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:14:40.818        13:51:23 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:14:40.818       13:51:23 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:14:40.818      13:51:23 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:14:40.818  #define SPDK_CONFIG_H
00:14:40.818  #define SPDK_CONFIG_AIO_FSDEV 1
00:14:40.818  #define SPDK_CONFIG_APPS 1
00:14:40.818  #define SPDK_CONFIG_ARCH native
00:14:40.818  #define SPDK_CONFIG_ASAN 1
00:14:40.818  #undef SPDK_CONFIG_AVAHI
00:14:40.818  #undef SPDK_CONFIG_CET
00:14:40.818  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:14:40.818  #define SPDK_CONFIG_COVERAGE 1
00:14:40.818  #define SPDK_CONFIG_CROSS_PREFIX 
00:14:40.818  #undef SPDK_CONFIG_CRYPTO
00:14:40.818  #undef SPDK_CONFIG_CRYPTO_MLX5
00:14:40.818  #undef SPDK_CONFIG_CUSTOMOCF
00:14:40.818  #undef SPDK_CONFIG_DAOS
00:14:40.818  #define SPDK_CONFIG_DAOS_DIR 
00:14:40.818  #define SPDK_CONFIG_DEBUG 1
00:14:40.818  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:14:40.818  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:40.818  #define SPDK_CONFIG_DPDK_INC_DIR 
00:14:40.818  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:14:40.818  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:14:40.818  #undef SPDK_CONFIG_DPDK_UADK
00:14:40.818  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:40.818  #define SPDK_CONFIG_EXAMPLES 1
00:14:40.818  #undef SPDK_CONFIG_FC
00:14:40.818  #define SPDK_CONFIG_FC_PATH 
00:14:40.818  #define SPDK_CONFIG_FIO_PLUGIN 1
00:14:40.818  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:14:40.818  #define SPDK_CONFIG_FSDEV 1
00:14:40.818  #undef SPDK_CONFIG_FUSE
00:14:40.818  #undef SPDK_CONFIG_FUZZER
00:14:40.818  #define SPDK_CONFIG_FUZZER_LIB 
00:14:40.818  #undef SPDK_CONFIG_GOLANG
00:14:40.818  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:14:40.818  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:14:40.818  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:14:40.818  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:14:40.818  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:14:40.818  #undef SPDK_CONFIG_HAVE_LIBBSD
00:14:40.818  #undef SPDK_CONFIG_HAVE_LZ4
00:14:40.818  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:14:40.818  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:14:40.818  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:14:40.818  #define SPDK_CONFIG_IDXD 1
00:14:40.818  #define SPDK_CONFIG_IDXD_KERNEL 1
00:14:40.818  #undef SPDK_CONFIG_IPSEC_MB
00:14:40.818  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:14:40.818  #define SPDK_CONFIG_ISAL 1
00:14:40.818  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:14:40.818  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:14:40.818  #define SPDK_CONFIG_LIBDIR 
00:14:40.818  #undef SPDK_CONFIG_LTO
00:14:40.818  #define SPDK_CONFIG_MAX_LCORES 128
00:14:40.818  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:14:40.818  #define SPDK_CONFIG_NVME_CUSE 1
00:14:40.818  #undef SPDK_CONFIG_OCF
00:14:40.818  #define SPDK_CONFIG_OCF_PATH 
00:14:40.818  #define SPDK_CONFIG_OPENSSL_PATH 
00:14:40.818  #undef SPDK_CONFIG_PGO_CAPTURE
00:14:40.818  #define SPDK_CONFIG_PGO_DIR 
00:14:40.818  #undef SPDK_CONFIG_PGO_USE
00:14:40.818  #define SPDK_CONFIG_PREFIX /usr/local
00:14:40.818  #undef SPDK_CONFIG_RAID5F
00:14:40.818  #undef SPDK_CONFIG_RBD
00:14:40.818  #define SPDK_CONFIG_RDMA 1
00:14:40.818  #define SPDK_CONFIG_RDMA_PROV verbs
00:14:40.818  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:14:40.818  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:14:40.818  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:14:40.818  #undef SPDK_CONFIG_SHARED
00:14:40.818  #undef SPDK_CONFIG_SMA
00:14:40.818  #define SPDK_CONFIG_TESTS 1
00:14:40.818  #undef SPDK_CONFIG_TSAN
00:14:40.818  #define SPDK_CONFIG_UBLK 1
00:14:40.818  #define SPDK_CONFIG_UBSAN 1
00:14:40.818  #define SPDK_CONFIG_UNIT_TESTS 1
00:14:40.818  #undef SPDK_CONFIG_URING
00:14:40.818  #define SPDK_CONFIG_URING_PATH 
00:14:40.818  #undef SPDK_CONFIG_URING_ZNS
00:14:40.818  #undef SPDK_CONFIG_USDT
00:14:40.818  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:14:40.818  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:14:40.818  #undef SPDK_CONFIG_VFIO_USER
00:14:40.818  #define SPDK_CONFIG_VFIO_USER_DIR 
00:14:40.818  #define SPDK_CONFIG_VHOST 1
00:14:40.818  #define SPDK_CONFIG_VIRTIO 1
00:14:40.818  #undef SPDK_CONFIG_VTUNE
00:14:40.818  #define SPDK_CONFIG_VTUNE_DIR 
00:14:40.818  #define SPDK_CONFIG_WERROR 1
00:14:40.818  #define SPDK_CONFIG_WPDK_DIR 
00:14:40.818  #undef SPDK_CONFIG_XNVME
00:14:40.818  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:14:40.819      13:51:23 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
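
The wall of backslashes at applications.sh@23 is just bash escaping every character of the pattern '#define SPDK_CONFIG_DEBUG' so that [[ ... == *pattern* ]] matches it literally against the full contents of config.h. The same test, written readably:

    # Equivalent form of the applications.sh@23 check:
    if [[ $(< /home/vagrant/spdk_repo/spdk/include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build detected; @24 then gates on SPDK_AUTOTEST_DEBUG_APPS
    fi
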
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:40.819      13:51:23 reactor_set_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:14:40.819      13:51:23 reactor_set_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:40.819      13:51:23 reactor_set_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:40.819      13:51:23 reactor_set_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:40.819       13:51:23 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:40.819       13:51:23 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:40.819       13:51:23 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:40.819       13:51:23 reactor_set_interrupt -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:40.819       13:51:23 reactor_set_interrupt -- paths/export.sh@6 -- # export PATH
00:14:40.819       13:51:23 reactor_set_interrupt -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:14:40.819        13:51:23 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:14:40.819       13:51:23 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:14:40.819       13:51:23 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:14:40.819       13:51:23 reactor_set_interrupt -- pm/common@68 -- # uname -s
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]=
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E'
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]]
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:14:40.819      13:51:23 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@70 -- # :
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 1
00:14:40.819     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@126 -- # :
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@140 -- # :
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : true
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@154 -- # :
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@169 -- # :
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@175 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@177 -- # : 0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@206 -- # cat
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export QEMU_BIN=
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@259 -- # QEMU_BIN=
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:14:40.820     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@269 -- # _LCOV=
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@275 -- # lcov_opt=
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@279 -- # export valgrind=
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@279 -- # valgrind=
00:14:40.821      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@285 -- # uname -s
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@289 -- # MAKE=make
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@309 -- # TEST_MODE=
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@331 -- # [[ -z 75614 ]]
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@331 -- # kill -0 75614
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@344 -- # local mount target_dir
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:14:40.821      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.08URvZ
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.08URvZ/tests/interrupt /tmp/spdk.08URvZ
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@340 -- # df -T
00:14:40.821      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1249312768
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254027264
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4714496
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=9658728448
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=19681529856
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=10006024192
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=6265352192
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=6270115840
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda16
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=777306112
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=923156480
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=81207296
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=103000064
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=6395904
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1254010880
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=94617337856
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=5085442048
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:14:40.821  * Looking for test storage...
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@381 -- # local target_space new_size
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:14:40.821      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.821      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@385 -- # mount=/
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@387 -- # target_space=9658728448
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:14:40.821     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]]
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]]
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@394 -- # new_size=12220616704
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.822  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@402 -- # return 0
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1698 -- # set -o errtrace
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1703 -- # true
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1705 -- # xtrace_fd
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:40.822      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1711 -- # lcov --version
00:14:40.822      13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:40.822      13:51:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:40.822     13:51:23 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:40.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.822  		--rc genhtml_branch_coverage=1
00:14:40.822  		--rc genhtml_function_coverage=1
00:14:40.822  		--rc genhtml_legend=1
00:14:40.822  		--rc geninfo_all_blocks=1
00:14:40.822  		--rc geninfo_unexecuted_blocks=1
00:14:40.822  		
00:14:40.822  		'
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:40.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.822  		--rc genhtml_branch_coverage=1
00:14:40.822  		--rc genhtml_function_coverage=1
00:14:40.822  		--rc genhtml_legend=1
00:14:40.822  		--rc geninfo_all_blocks=1
00:14:40.822  		--rc geninfo_unexecuted_blocks=1
00:14:40.822  		
00:14:40.822  		'
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:40.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.822  		--rc genhtml_branch_coverage=1
00:14:40.822  		--rc genhtml_function_coverage=1
00:14:40.822  		--rc genhtml_legend=1
00:14:40.822  		--rc geninfo_all_blocks=1
00:14:40.822  		--rc geninfo_unexecuted_blocks=1
00:14:40.822  		
00:14:40.822  		'
00:14:40.822     13:51:23 reactor_set_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:40.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:40.822  		--rc genhtml_branch_coverage=1
00:14:40.822  		--rc genhtml_function_coverage=1
00:14:40.822  		--rc genhtml_legend=1
00:14:40.822  		--rc geninfo_all_blocks=1
00:14:40.822  		--rc geninfo_unexecuted_blocks=1
00:14:40.822  		
00:14:40.822  		'
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07
00:14:40.822    13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=75682
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:40.822   13:51:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 75682 /var/tmp/spdk.sock
00:14:40.822   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 75682 ']'
00:14:40.822   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:40.822   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:40.822  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:40.822   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:40.822   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:40.822   13:51:23 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:14:40.822  [2024-12-11 13:51:23.572029] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:40.822  [2024-12-11 13:51:23.572208] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75682 ]
00:14:41.082  [2024-12-11 13:51:23.765430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:41.341  [2024-12-11 13:51:23.928676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:41.341  [2024-12-11 13:51:23.928789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:41.341  [2024-12-11 13:51:23.928829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:14:41.601  [2024-12-11 13:51:24.339578] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:14:41.859   13:51:24 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:41.859   13:51:24 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0
00:14:41.859   13:51:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem
00:14:41.859   13:51:24 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:42.428  Malloc0
00:14:42.428  Malloc1
00:14:42.428  Malloc2
00:14:42.428   13:51:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio
00:14:42.428    13:51:24 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s
00:14:42.428   13:51:24 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:14:42.428   13:51:24 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:14:42.428  5000+0 records in
00:14:42.428  5000+0 records out
00:14:42.428  10240000 bytes (10 MB, 9.8 MiB) copied, 0.019845 s, 516 MB/s
00:14:42.429   13:51:24 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:14:42.429  AIO0
00:14:42.687   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 75682
00:14:42.687   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 75682 without_thd
00:14:42.687   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=75682
00:14:42.687   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd
00:14:42.687   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:42.687     13:51:25 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:14:42.687     13:51:25 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1
00:14:42.687   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4
00:14:42.687    13:51:25 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:42.687     13:51:25 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:14:42.687     13:51:25 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:42.947    13:51:25 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo ''
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:14:42.947  spdk_thread ids are 1 on reactor0.
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 75682 0
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75682 0 idle
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:42.947   13:51:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:42.947    13:51:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:14:42.947    13:51:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75682 root      20   0   20.1t 150144  33408 S   9.1   1.2   0:00.89 reactor_0'
00:14:43.206    13:51:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75682 root 20 0 20.1t 150144 33408 S 9.1 1.2 0:00.89 reactor_0
00:14:43.206    13:51:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:43.206    13:51:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=9.1
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=9
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 75682 1
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75682 1 idle
00:14:43.206   13:51:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:43.207   13:51:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:43.207    13:51:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:43.207    13:51:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75685 root      20   0   20.1t 150144  33408 S   0.0   1.2   0:00.00 reactor_1'
00:14:43.466    13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75685 root 20 0 20.1t 150144 33408 S 0.0 1.2 0:00.00 reactor_1
00:14:43.466    13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:43.466    13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 75682 2
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75682 2 idle
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:43.466   13:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:43.466    13:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:43.466    13:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75686 root      20   0   20.1t 150144  33408 S   0.0   1.2   0:00.00 reactor_2'
00:14:43.726    13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:43.726    13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75686 root 20 0 20.1t 150144 33408 S 0.0 1.2 0:00.00 reactor_2
00:14:43.726    13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
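The reactor_1 and reactor_2 idle checks above both come from reactor_is_busy_or_idle in interrupt/common.sh: take one batch-mode top snapshot of the target pid in thread view, grep out the reactor_<idx> row, read the %CPU column, and compare it against fixed thresholds (busy means at least 65%, idle means at most 30%). A minimal standalone sketch of that pipeline, with pid/idx as hypothetical placeholders and a single sample where the real helper retries up to 10 times:

    #!/usr/bin/env bash
    # Sketch of the reactor_is_busy_or_idle check traced above.
    pid=75682 idx=2 state=idle           # hypothetical values from the trace
    busy_threshold=65 idle_threshold=30
    # -b batch mode, -H per-thread rows, -n 1 one iteration, -w 256 wide output.
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    # Column 9 of the top row is %CPU; trim leading blanks, drop the fraction.
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}
    if [[ $state == busy ]]; then
        (( cpu_rate >= busy_threshold ))    # busy reactors poll near 100%
    else
        (( cpu_rate <= idle_threshold ))    # idle reactors sit near 0%
    fi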
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']'
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}"
00:14:43.726   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2
00:14:43.726  [2024-12-11 13:51:26.506871] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
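With all three reactors confirmed idle, the without_thd variant first migrates reactor 0's spdk_threads onto reactor 1 (cpumask 0x2) so reactor 0's own CPU usage can be measured without application threads on it. A sketch of that loop, assuming thd0_ids was filled from thread_get_stats earlier in the test:

    # Hypothetical sketch: move each thread found on reactor 0 to cpumask 0x2.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    thd0_ids=(1)    # assumed contents, matching the single app_thread in the trace
    for i in "${thd0_ids[@]}"; do
        "$rpc_py" thread_set_cpumask -i "$i" -m 0x2
    done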
00:14:43.986   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:14:43.986  [2024-12-11 13:51:26.718044] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:14:43.986  [2024-12-11 13:51:26.718914] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:43.986   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:14:44.247  [2024-12-11 13:51:26.978196] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:14:44.247  [2024-12-11 13:51:26.979336] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
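The mode flips themselves are plain RPCs through the interrupt_plugin: -d disables interrupt mode, putting the reactor back into polling (which is why the busy checks just below see reactor_0 and reactor_2 at roughly 90-100% CPU), and the same call without -d re-enables interrupt mode, after which the idle checks see them back at 0.0%. A sketch of the call pairs used here:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Switch reactors 0 and 2 back to poll mode ...
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    # ... and later return them to interrupt mode.
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2
    "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0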
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 75682 0
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 75682 0 busy
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:14:44.247   13:51:26 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:44.247   13:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:44.247   13:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:44.247    13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:14:44.247    13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75682 root      20   0   20.1t 153600  33408 R  90.9   1.3   0:01.39 reactor_0'
00:14:44.510    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75682 root 20 0 20.1t 153600 33408 R 90.9 1.3 0:01.39 reactor_0
00:14:44.510    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:44.510    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=90.9
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=90
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 75682 2
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 75682 2 busy
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:44.510   13:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:44.510    13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:14:44.510    13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:44.768   13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75686 root      20   0   20.1t 153600  33408 R  99.9   1.3   0:00.47 reactor_2'
00:14:44.769    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:44.769    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75686 root 20 0 20.1t 153600 33408 R 99.9 1.3 0:00.47 reactor_2
00:14:44.769    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:44.769   13:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:14:45.028  [2024-12-11 13:51:27.658044] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:14:45.028  [2024-12-11 13:51:27.658723] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']'
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 75682 2
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75682 2 idle
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:45.028   13:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:45.028    13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:14:45.028    13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75686 root      20   0   20.1t 153728  33408 S   0.0   1.3   0:00.67 reactor_2'
00:14:45.287    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75686 root 20 0 20.1t 153728 33408 S 0.0 1.3 0:00.67 reactor_2
00:14:45.287    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:45.287    13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:45.287   13:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:14:45.547  [2024-12-11 13:51:28.182112] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:14:45.547  [2024-12-11 13:51:28.183005] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:45.547   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']'
00:14:45.547   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}"
00:14:45.547   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1
00:14:45.806  [2024-12-11 13:51:28.438811] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:14:45.806   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 75682 0
00:14:45.806   13:51:28 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75682 0 idle
00:14:45.806   13:51:28 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75682
00:14:45.806   13:51:28 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:14:45.806   13:51:28 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:45.806   13:51:28 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:45.807   13:51:28 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:45.807   13:51:28 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:45.807   13:51:28 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:45.807   13:51:28 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:45.807   13:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:45.807   13:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:45.807    13:51:28 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75682 -w 256
00:14:45.807    13:51:28 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75682 root      20   0   20.1t 153856  33408 S   0.0   1.3   0:02.36 reactor_0'
00:14:46.066    13:51:28 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75682 root 20 0 20.1t 153856 33408 S 0.0 1.3 0:02.36 reactor_0
00:14:46.066    13:51:28 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:46.066    13:51:28 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT
00:14:46.066   13:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 75682
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 75682 ']'
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 75682
00:14:46.066    13:51:28 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:46.066    13:51:28 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75682
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75682'
00:14:46.066  killing process with pid 75682
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 75682
00:14:46.066   13:51:28 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 75682
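Teardown goes through the shared killprocess helper: confirm the pid is still alive with kill -0, read the process name so a sudo wrapper would be handled specially, then kill and wait for it. A reduced sketch of the sequence traced above (the uname/Linux branch is elided):

    pid=75682    # hypothetical placeholder
    kill -0 "$pid"                                   # fails if already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # only reaps children of the current shell
    fi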
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=75836
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:47.973   13:51:30 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 75836 /var/tmp/spdk.sock
00:14:47.973   13:51:30 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 75836 ']'
00:14:47.973   13:51:30 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:47.973   13:51:30 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:47.973   13:51:30 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:47.973  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:47.973   13:51:30 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:47.973   13:51:30 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:14:47.973  [2024-12-11 13:51:30.578759] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:47.973  [2024-12-11 13:51:30.578947] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75836 ]
00:14:48.232  [2024-12-11 13:51:30.773593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:48.232  [2024-12-11 13:51:30.935861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:48.232  [2024-12-11 13:51:30.936013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:48.232  [2024-12-11 13:51:30.936052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:14:48.800  [2024-12-11 13:51:31.358713] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:14:48.800   13:51:31 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:48.800   13:51:31 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0
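The second test phase restarts interrupt_tgt with the same flags recorded in the trace (-m 0x07 -r /var/tmp/spdk.sock -E -g) and blocks in waitforlisten until the RPC server answers on the UNIX socket. A condensed sketch under the repo layout above; the polling loop is a simplification of the real waitforlisten, using the rpc_get_methods RPC as a liveness probe:

    spdk=/home/vagrant/spdk_repo/spdk
    rpc_addr=/var/tmp/spdk.sock
    "$spdk/build/examples/interrupt_tgt" -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Poll until the app starts listening on the socket (simplified).
    for ((i = 100; i > 0; i--)); do
        "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( i > 0 ))    # fail the test if the target never came up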
00:14:48.800   13:51:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem
00:14:48.800   13:51:31 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:49.059  Malloc0
00:14:49.059  Malloc1
00:14:49.059  Malloc2
00:14:49.318   13:51:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio
00:14:49.318    13:51:31 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s
00:14:49.318   13:51:31 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:14:49.318   13:51:31 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:14:49.318  5000+0 records in
00:14:49.318  5000+0 records out
00:14:49.318  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0272423 s, 376 MB/s
00:14:49.318   13:51:31 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:14:49.577  AIO0
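setup_bdev_aio backs an AIO bdev with an ordinary file: on Linux, dd writes 5000 zeroed 2048-byte blocks (the 10,240,000 bytes reported above), and bdev_aio_create registers the file as AIO0 with a 2048-byte block size. The equivalent two commands, using the trace's paths:

    spdk=/home/vagrant/spdk_repo/spdk
    aiofile="$spdk/test/interrupt/aiofile"
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000   # 10 MB backing file
    "$spdk/scripts/rpc.py" bdev_aio_create "$aiofile" AIO0 2048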
00:14:49.577   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 75836
00:14:49.577   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 75836
00:14:49.577   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=75836
00:14:49.577   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=
00:14:49.577   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:14:49.577    13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:14:49.577    13:51:32 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1
00:14:49.577    13:51:32 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:14:49.577    13:51:32 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1
00:14:49.577    13:51:32 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:49.577     13:51:32 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:14:49.577     13:51:32 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:49.836    13:51:32 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1
00:14:49.836   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:14:49.836    13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:14:49.836    13:51:32 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4
00:14:49.836    13:51:32 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:14:49.836    13:51:32 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4
00:14:49.836    13:51:32 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:49.836     13:51:32 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:14:49.836     13:51:32 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:14:50.106    13:51:32 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo ''
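reactor_get_thread_ids maps a reactor cpumask to the spdk_thread ids currently scheduled on it by filtering thread_get_stats output with jq; the 0x prefix is dropped first because the JSON reports cpumasks without it. In this run, reactor 0 (0x1) yields thread id 1 while reactor 2 (0x4) yields nothing, since no threads live there yet. A sketch of the lookup:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    reactor_cpumask=0x1
    reactor_cpumask=${reactor_cpumask#0x}   # match the unprefixed JSON field
    "$rpc_py" thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" \
             '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'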
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:14:50.106  spdk_thread ids are 1 on reactor0.
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 75836 0
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75836 0 idle
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:50.106    13:51:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:50.106    13:51:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:14:50.106   13:51:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75836 root      20   0   20.1t 150144  33536 S   0.0   1.2   0:00.91 reactor_0'
00:14:50.383    13:51:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75836 root 20 0 20.1t 150144 33536 S 0.0 1.2 0:00.91 reactor_0
00:14:50.383    13:51:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:50.383    13:51:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 75836 1
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75836 1 idle
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:50.383   13:51:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:50.383    13:51:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:50.383    13:51:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:14:50.383   13:51:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75840 root      20   0   20.1t 150144  33536 S   0.0   1.2   0:00.00 reactor_1'
00:14:50.383    13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75840 root 20 0 20.1t 150144 33536 S 0.0 1.2 0:00.00 reactor_1
00:14:50.383    13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:50.383    13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:50.383   13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 75836 2
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75836 2 idle
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:50.384   13:51:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:50.384    13:51:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:50.384    13:51:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75841 root      20   0   20.1t 150144  33536 S   0.0   1.2   0:00.00 reactor_2'
00:14:50.643    13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:50.643    13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75841 root 20 0 20.1t 150144 33536 S 0.0 1.2 0:00.00 reactor_2
00:14:50.643    13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']'
00:14:50.643   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:14:50.902  [2024-12-11 13:51:33.604919] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:14:50.902  [2024-12-11 13:51:33.605185] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode.
00:14:50.902  [2024-12-11 13:51:33.605948] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:50.902   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:14:51.161  [2024-12-11 13:51:33.872978] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:14:51.161  [2024-12-11 13:51:33.873895] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 75836 0
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 75836 0 busy
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:51.161   13:51:33 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:51.161    13:51:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:51.161    13:51:33 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75836 root      20   0   20.1t 153472  33536 R  99.9   1.3   0:01.42 reactor_0'
00:14:51.421    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:51.421    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75836 root 20 0 20.1t 153472 33536 R 99.9 1.3 0:01.42 reactor_0
00:14:51.421    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 75836 2
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 75836 2 busy
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:51.421   13:51:34 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:51.421    13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:51.421    13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75841 root      20   0   20.1t 153472  33536 R  99.9   1.3   0:00.46 reactor_2'
00:14:51.680    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75841 root 20 0 20.1t 153472 33536 R 99.9 1.3 0:00.46 reactor_2
00:14:51.680    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:51.680    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:14:51.680   13:51:34 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:51.681   13:51:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:14:51.940  [2024-12-11 13:51:34.593614] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:14:51.940  [2024-12-11 13:51:34.594450] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']'
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 75836 2
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75836 2 idle
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:51.940   13:51:34 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:51.940    13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:51.940    13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75841 root      20   0   20.1t 153472  33536 S   0.0   1.3   0:00.72 reactor_2'
00:14:52.199    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75841 root 20 0 20.1t 153472 33536 S 0.0 1.3 0:00.72 reactor_2
00:14:52.199    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:52.199    13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:52.199   13:51:34 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:14:52.459  [2024-12-11 13:51:35.029680] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:14:52.459  [2024-12-11 13:51:35.030993] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:14:52.459  [2024-12-11 13:51:35.031075] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 75836 0
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 75836 0 idle
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=75836
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:14:52.459   13:51:35 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:14:52.459    13:51:35 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 75836 -w 256
00:14:52.459    13:51:35 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  75836 root      20   0   20.1t 153600  33536 S   0.0   1.3   0:02.34 reactor_0'
00:14:52.718    13:51:35 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 75836 root 20 0 20.1t 153600 33536 S 0.0 1.3 0:02.34 reactor_0
00:14:52.718    13:51:35 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:14:52.718    13:51:35 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:14:52.718   13:51:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 75836
00:14:52.718   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 75836 ']'
00:14:52.718   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 75836
00:14:52.718    13:51:35 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname
00:14:52.718   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:52.718    13:51:35 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75836
00:14:52.718   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:52.718   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:52.718   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75836'
00:14:52.718  killing process with pid 75836
00:14:52.719   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 75836
00:14:52.719   13:51:35 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 75836
00:14:54.622   13:51:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:14:54.622   13:51:37 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:14:54.622  
00:14:54.622  real	0m14.012s
00:14:54.622  user	0m13.712s
00:14:54.622  sys	0m2.491s
00:14:54.622   13:51:37 reactor_set_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:54.622   13:51:37 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:14:54.622  ************************************
00:14:54.622  END TEST reactor_set_interrupt
00:14:54.622  ************************************
00:14:54.622   13:51:37  -- spdk/autotest.sh@184 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:14:54.622   13:51:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:54.622   13:51:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:54.622   13:51:37  -- common/autotest_common.sh@10 -- # set +x
00:14:54.622  ************************************
00:14:54.622  START TEST reap_unregistered_poller
00:14:54.622  ************************************
00:14:54.623   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:14:54.623  * Looking for test storage...
00:14:54.623  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1711 -- # lcov --version
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-:
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-:
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<'
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:54.623     13:51:37 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:54.623    13:51:37 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0
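The lt 1.15 2 probe above is scripts/common.sh's cmp_versions at work: each version string is split on ., -, and : into an array, and the components are compared left to right, with missing components treated as zero. A simplified sketch of the "less than" case, assuming purely numeric components:

    # Simplified sketch of cmp_versions for the "<" operator.
    version_lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"    # matches the probe above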
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:54.623  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.623  		--rc genhtml_branch_coverage=1
00:14:54.623  		--rc genhtml_function_coverage=1
00:14:54.623  		--rc genhtml_legend=1
00:14:54.623  		--rc geninfo_all_blocks=1
00:14:54.623  		--rc geninfo_unexecuted_blocks=1
00:14:54.623  		
00:14:54.623  		'
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:54.623  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.623  		--rc genhtml_branch_coverage=1
00:14:54.623  		--rc genhtml_function_coverage=1
00:14:54.623  		--rc genhtml_legend=1
00:14:54.623  		--rc geninfo_all_blocks=1
00:14:54.623  		--rc geninfo_unexecuted_blocks=1
00:14:54.623  		
00:14:54.623  		'
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:54.623  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.623  		--rc genhtml_branch_coverage=1
00:14:54.623  		--rc genhtml_function_coverage=1
00:14:54.623  		--rc genhtml_legend=1
00:14:54.623  		--rc geninfo_all_blocks=1
00:14:54.623  		--rc geninfo_unexecuted_blocks=1
00:14:54.623  		
00:14:54.623  		'
00:14:54.623    13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:54.623  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.623  		--rc genhtml_branch_coverage=1
00:14:54.623  		--rc genhtml_function_coverage=1
00:14:54.623  		--rc genhtml_legend=1
00:14:54.623  		--rc geninfo_all_blocks=1
00:14:54.623  		--rc geninfo_unexecuted_blocks=1
00:14:54.623  		
00:14:54.623  		'
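Because the installed lcov (1.15) is older than 2.x, the harness exports the 1.x-style rc options: LCOV_OPTS carries the branch/function-coverage and genhtml rc flags, and LCOV bundles them into a ready-to-use command. A sketch of how later coverage steps could consume them; the capture invocation is illustrative, not from this log:

    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    export LCOV="lcov $LCOV_OPTS"
    # Later steps can then run, e.g.:
    $LCOV --capture --directory . --output-file coverage.info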
00:14:54.623   13:51:37 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:14:54.623      13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:14:54.623     13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.623    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.623     13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:14:54.623    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:54.623    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:14:54.623     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:14:54.623      13:51:37 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_CET=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_FC=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:14:54.624      13:51:37 reap_unregistered_poller -- common/build_config.sh@90 -- # CONFIG_URING=n
00:14:54.624     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:14:54.624        13:51:37 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:14:54.624       13:51:37 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:14:54.624      13:51:37 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:14:54.624  #define SPDK_CONFIG_H
00:14:54.624  #define SPDK_CONFIG_AIO_FSDEV 1
00:14:54.624  #define SPDK_CONFIG_APPS 1
00:14:54.624  #define SPDK_CONFIG_ARCH native
00:14:54.624  #define SPDK_CONFIG_ASAN 1
00:14:54.624  #undef SPDK_CONFIG_AVAHI
00:14:54.624  #undef SPDK_CONFIG_CET
00:14:54.624  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:14:54.624  #define SPDK_CONFIG_COVERAGE 1
00:14:54.624  #define SPDK_CONFIG_CROSS_PREFIX 
00:14:54.624  #undef SPDK_CONFIG_CRYPTO
00:14:54.624  #undef SPDK_CONFIG_CRYPTO_MLX5
00:14:54.624  #undef SPDK_CONFIG_CUSTOMOCF
00:14:54.624  #undef SPDK_CONFIG_DAOS
00:14:54.624  #define SPDK_CONFIG_DAOS_DIR 
00:14:54.624  #define SPDK_CONFIG_DEBUG 1
00:14:54.624  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:14:54.624  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:54.624  #define SPDK_CONFIG_DPDK_INC_DIR 
00:14:54.624  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:14:54.624  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:14:54.624  #undef SPDK_CONFIG_DPDK_UADK
00:14:54.624  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:54.624  #define SPDK_CONFIG_EXAMPLES 1
00:14:54.624  #undef SPDK_CONFIG_FC
00:14:54.624  #define SPDK_CONFIG_FC_PATH 
00:14:54.624  #define SPDK_CONFIG_FIO_PLUGIN 1
00:14:54.624  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:14:54.624  #define SPDK_CONFIG_FSDEV 1
00:14:54.624  #undef SPDK_CONFIG_FUSE
00:14:54.624  #undef SPDK_CONFIG_FUZZER
00:14:54.624  #define SPDK_CONFIG_FUZZER_LIB 
00:14:54.624  #undef SPDK_CONFIG_GOLANG
00:14:54.624  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:14:54.624  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:14:54.624  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:14:54.624  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:14:54.624  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:14:54.624  #undef SPDK_CONFIG_HAVE_LIBBSD
00:14:54.624  #undef SPDK_CONFIG_HAVE_LZ4
00:14:54.624  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:14:54.624  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:14:54.624  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:14:54.624  #define SPDK_CONFIG_IDXD 1
00:14:54.624  #define SPDK_CONFIG_IDXD_KERNEL 1
00:14:54.624  #undef SPDK_CONFIG_IPSEC_MB
00:14:54.624  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:14:54.624  #define SPDK_CONFIG_ISAL 1
00:14:54.624  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:14:54.624  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:14:54.624  #define SPDK_CONFIG_LIBDIR 
00:14:54.624  #undef SPDK_CONFIG_LTO
00:14:54.624  #define SPDK_CONFIG_MAX_LCORES 128
00:14:54.624  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:14:54.624  #define SPDK_CONFIG_NVME_CUSE 1
00:14:54.624  #undef SPDK_CONFIG_OCF
00:14:54.625  #define SPDK_CONFIG_OCF_PATH 
00:14:54.625  #define SPDK_CONFIG_OPENSSL_PATH 
00:14:54.625  #undef SPDK_CONFIG_PGO_CAPTURE
00:14:54.625  #define SPDK_CONFIG_PGO_DIR 
00:14:54.625  #undef SPDK_CONFIG_PGO_USE
00:14:54.625  #define SPDK_CONFIG_PREFIX /usr/local
00:14:54.625  #undef SPDK_CONFIG_RAID5F
00:14:54.625  #undef SPDK_CONFIG_RBD
00:14:54.625  #define SPDK_CONFIG_RDMA 1
00:14:54.625  #define SPDK_CONFIG_RDMA_PROV verbs
00:14:54.625  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:14:54.625  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:14:54.625  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:14:54.625  #undef SPDK_CONFIG_SHARED
00:14:54.625  #undef SPDK_CONFIG_SMA
00:14:54.625  #define SPDK_CONFIG_TESTS 1
00:14:54.625  #undef SPDK_CONFIG_TSAN
00:14:54.625  #define SPDK_CONFIG_UBLK 1
00:14:54.625  #define SPDK_CONFIG_UBSAN 1
00:14:54.625  #define SPDK_CONFIG_UNIT_TESTS 1
00:14:54.625  #undef SPDK_CONFIG_URING
00:14:54.625  #define SPDK_CONFIG_URING_PATH 
00:14:54.625  #undef SPDK_CONFIG_URING_ZNS
00:14:54.625  #undef SPDK_CONFIG_USDT
00:14:54.625  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:14:54.625  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:14:54.625  #undef SPDK_CONFIG_VFIO_USER
00:14:54.625  #define SPDK_CONFIG_VFIO_USER_DIR 
00:14:54.625  #define SPDK_CONFIG_VHOST 1
00:14:54.625  #define SPDK_CONFIG_VIRTIO 1
00:14:54.625  #undef SPDK_CONFIG_VTUNE
00:14:54.625  #define SPDK_CONFIG_VTUNE_DIR 
00:14:54.625  #define SPDK_CONFIG_WERROR 1
00:14:54.625  #define SPDK_CONFIG_WPDK_DIR 
00:14:54.625  #undef SPDK_CONFIG_XNVME
00:14:54.625  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:14:54.625      13:51:37 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:14:54.625     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:54.625      13:51:37 reap_unregistered_poller -- scripts/common.sh@15 -- # shopt -s extglob
00:14:54.625      13:51:37 reap_unregistered_poller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:54.625      13:51:37 reap_unregistered_poller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:54.625      13:51:37 reap_unregistered_poller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:54.625       13:51:37 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:54.625       13:51:37 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:54.625       13:51:37 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:54.886       13:51:37 reap_unregistered_poller -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:54.886       13:51:37 reap_unregistered_poller -- paths/export.sh@6 -- # export PATH
00:14:54.886       13:51:37 reap_unregistered_poller -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:54.886     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:14:54.886        13:51:37 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:14:54.887       13:51:37 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:14:54.887       13:51:37 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:14:54.887       13:51:37 reap_unregistered_poller -- pm/common@68 -- # uname -s
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]=
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E'
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]]
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:14:54.887      13:51:37 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@70 -- # :
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 1
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@126 -- # :
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@140 -- # :
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : true
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@154 -- # :
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0
00:14:54.887     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@169 -- # :
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@175 -- # : 0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@177 -- # : 0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@206 -- # cat
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export QEMU_BIN=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@259 -- # QEMU_BIN=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@269 -- # _LCOV=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@275 -- # lcov_opt=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@279 -- # export valgrind=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@279 -- # valgrind=
00:14:54.888      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@285 -- # uname -s
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@289 -- # MAKE=make
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@309 -- # TEST_MODE=
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@331 -- # [[ -z 76013 ]]
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@331 -- # kill -0 76013
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@344 -- # local mount target_dir
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:14:54.888      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ro0fVO
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.ro0fVO/tests/interrupt /tmp/spdk.ro0fVO
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.888      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@340 -- # df -T
00:14:54.888      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1249312768
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254027264
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4714496
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.888     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=9658683392
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=19681529856
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=10006069248
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=6265352192
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=6270115840
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda16
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=777306112
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=923156480
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=81207296
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=103000064
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=6395904
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1254010880
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=94617026560
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=5085753344
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:14:54.889  * Looking for test storage...
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@381 -- # local target_space new_size
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:14:54.889      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.889      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@385 -- # mount=/
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@387 -- # target_space=9658683392
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]]
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]]
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@394 -- # new_size=12220661760
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.889  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@402 -- # return 0
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1698 -- # set -o errtrace
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1703 -- # true
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1705 -- # xtrace_fd
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:54.889      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:54.889      13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1711 -- # lcov --version
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-:
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-:
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<'
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:54.889      13:51:37 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:54.889     13:51:37 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:54.889  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.889  		--rc genhtml_branch_coverage=1
00:14:54.889  		--rc genhtml_function_coverage=1
00:14:54.889  		--rc genhtml_legend=1
00:14:54.889  		--rc geninfo_all_blocks=1
00:14:54.889  		--rc geninfo_unexecuted_blocks=1
00:14:54.889  		
00:14:54.889  		'
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:54.889  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.889  		--rc genhtml_branch_coverage=1
00:14:54.889  		--rc genhtml_function_coverage=1
00:14:54.889  		--rc genhtml_legend=1
00:14:54.889  		--rc geninfo_all_blocks=1
00:14:54.889  		--rc geninfo_unexecuted_blocks=1
00:14:54.889  		
00:14:54.889  		'
00:14:54.889     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:54.889  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.889  		--rc genhtml_branch_coverage=1
00:14:54.889  		--rc genhtml_function_coverage=1
00:14:54.889  		--rc genhtml_legend=1
00:14:54.889  		--rc geninfo_all_blocks=1
00:14:54.890  		--rc geninfo_unexecuted_blocks=1
00:14:54.890  		
00:14:54.890  		'
00:14:54.890     13:51:37 reap_unregistered_poller -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:54.890  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.890  		--rc genhtml_branch_coverage=1
00:14:54.890  		--rc genhtml_function_coverage=1
00:14:54.890  		--rc genhtml_legend=1
00:14:54.890  		--rc geninfo_all_blocks=1
00:14:54.890  		--rc geninfo_unexecuted_blocks=1
00:14:54.890  		
00:14:54.890  		'
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07
00:14:54.890    13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=76070
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:54.890   13:51:37 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 76070 /var/tmp/spdk.sock
00:14:54.890   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@835 -- # '[' -z 76070 ']'
00:14:54.890   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:54.890   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:54.890   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:54.890  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:54.890   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:54.890   13:51:37 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:14:55.149  [2024-12-11 13:51:37.720627] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:14:55.149  [2024-12-11 13:51:37.721070] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76070 ]
00:14:55.149  [2024-12-11 13:51:37.913866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:55.408  [2024-12-11 13:51:38.079792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:55.408  [2024-12-11 13:51:38.079953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:55.408  [2024-12-11 13:51:38.079993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:14:55.976  [2024-12-11 13:51:38.502392] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:14:55.976   13:51:38 reap_unregistered_poller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:55.976   13:51:38 reap_unregistered_poller -- common/autotest_common.sh@868 -- # return 0
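
Steps @835-@868 above are autotest_common.sh's waitforlisten: print the banner, then poll until the freshly started interrupt_tgt answers on its RPC socket. A condensed sketch of that loop (assuming SPDK's scripts/rpc.py and its rpc_get_methods call; the real helper carries more error handling):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        [[ -n $pid ]] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 100; i != 0; i-- )); do           # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
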
00:14:55.976    13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]'
00:14:55.976    13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers
00:14:55.976    13:51:38 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:55.976    13:51:38 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:14:55.976    13:51:38 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{
00:14:55.976    "name": "app_thread",
00:14:55.976    "id": 1,
00:14:55.976    "active_pollers": [],
00:14:55.976    "timed_pollers": [
00:14:55.976      {
00:14:55.976        "name": "rpc_subsystem_poll_servers",
00:14:55.976        "id": 1,
00:14:55.976        "state": "waiting",
00:14:55.976        "run_count": 0,
00:14:55.976        "busy_count": 0,
00:14:55.976        "period_ticks": 8400000
00:14:55.976      }
00:14:55.976    ],
00:14:55.976    "paused_pollers": []
00:14:55.976  }'
00:14:55.976    13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name'
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers=
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' '
00:14:55.976    13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name'
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers
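
Steps @20-@23 above capture the app thread's poller list over RPC and flatten the poller names into native_pollers. The same extraction, runnable against the JSON shown (rpc_cmd swapped for a direct scripts/rpc.py call to keep the sketch standalone):

    app_thread=$(scripts/rpc.py thread_get_pollers | jq -r '.threads[0]')
    native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")    # empty in this run
    native_pollers+=' '
    native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
    echo "$native_pollers"    # -> " rpc_subsystem_poll_servers"
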
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio
00:14:55.976    13:51:38 reap_unregistered_poller -- interrupt/common.sh@77 -- # uname -s
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:14:55.976  5000+0 records in
00:14:55.976  5000+0 records out
00:14:55.976  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186618 s, 549 MB/s
00:14:55.976   13:51:38 reap_unregistered_poller -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:14:56.236  AIO0
00:14:56.236   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:14:56.494   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1
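
setup_bdev_aio (common.sh@77-@79) plus steps @33-@34 back a bdev with a plain file and wait for bdev examine to settle. A standalone sketch of the same sequence, with the repo-local aiofile path shortened for illustration:

    if [[ $(uname -s) != "FreeBSD" ]]; then
        dd if=/dev/zero of=./aiofile bs=2048 count=5000      # ~10 MB backing file
        scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048   # bdev "AIO0", 2048-byte blocks
        scripts/rpc.py bdev_wait_for_examine                 # let examine callbacks finish
        sleep 0.1
    fi
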
00:14:56.783    13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers
00:14:56.783    13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]'
00:14:56.783    13:51:39 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.783    13:51:39 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:14:56.783    13:51:39 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.783   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{
00:14:56.783    "name": "app_thread",
00:14:56.783    "id": 1,
00:14:56.783    "active_pollers": [],
00:14:56.783    "timed_pollers": [
00:14:56.783      {
00:14:56.783        "name": "rpc_subsystem_poll_servers",
00:14:56.783        "id": 1,
00:14:56.784        "state": "waiting",
00:14:56.784        "run_count": 0,
00:14:56.784        "busy_count": 0,
00:14:56.784        "period_ticks": 8400000
00:14:56.784      }
00:14:56.784    ],
00:14:56.784    "paused_pollers": []
00:14:56.784  }'
00:14:56.784    13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name'
00:14:56.784   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers=
00:14:56.784   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' '
00:14:56.784    13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name'
00:14:56.784   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers
00:14:56.784   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[  rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]]
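
The backslash-heavy right-hand side in the check above is just xtrace rendering: inside [[ ]] an unquoted == operand is a glob pattern, so the trace escapes every character to show it is matched literally. Spelled out with the variable names from the surrounding trace, the check is simply:

    native_pollers=' rpc_subsystem_poll_servers'
    remaining_pollers=' rpc_subsystem_poll_servers'
    [[ $remaining_pollers == $native_pollers ]] && echo 'pollers unchanged after reap'
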
00:14:56.784   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:14:56.784   13:51:39 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 76070
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@954 -- # '[' -z 76070 ']'
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@958 -- # kill -0 76070
00:14:56.784    13:51:39 reap_unregistered_poller -- common/autotest_common.sh@959 -- # uname
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:56.784    13:51:39 reap_unregistered_poller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76070
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:56.784  killing process with pid 76070
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76070'
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@973 -- # kill 76070
00:14:56.784   13:51:39 reap_unregistered_poller -- common/autotest_common.sh@978 -- # wait 76070
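
Steps @954-@978 above are autotest_common.sh's killprocess: verify the pid exists, check what it is, then kill and reap it. A condensed sketch (the real helper special-cases processes launched through sudo, which is omitted here):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0               # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # 'reactor_0' in the trace
        fi
        [[ $process_name == sudo ]] && return 1              # skip the sudo wrapper case
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                      # reap; ignore exit status
    }
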
00:14:58.167   13:51:40 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup
00:14:58.167   13:51:40 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:14:58.167  ************************************
00:14:58.167  END TEST reap_unregistered_poller
00:14:58.167  ************************************
00:14:58.167  
00:14:58.167  real	0m3.709s
00:14:58.167  user	0m2.950s
00:14:58.167  sys	0m0.908s
00:14:58.167   13:51:40 reap_unregistered_poller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:58.167   13:51:40 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:14:58.167   13:51:40  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:14:58.167    13:51:40  -- spdk/autotest.sh@194 -- # uname -s
00:14:58.167   13:51:40  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:14:58.167   13:51:40  -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]]
00:14:58.167   13:51:40  -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]]
00:14:58.167   13:51:40  -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:14:58.167   13:51:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:58.167   13:51:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:58.167   13:51:40  -- common/autotest_common.sh@10 -- # set +x
00:14:58.167  ************************************
00:14:58.167  START TEST spdk_dd
00:14:58.167  ************************************
00:14:58.167   13:51:40 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:14:58.426  * Looking for test storage...
00:14:58.426  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:14:58.426     13:51:41 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:58.426      13:51:41 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:58.426      13:51:41 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version
00:14:58.426     13:51:41 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@336 -- # IFS=.-:
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@337 -- # IFS=.-:
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@338 -- # local 'op=<'
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@344 -- # case "$op" in
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@345 -- # : 1
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@365 -- # decimal 1
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@353 -- # local d=1
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@355 -- # echo 1
00:14:58.426     13:51:41 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@366 -- # decimal 2
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@353 -- # local d=2
00:14:58.426      13:51:41 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:58.427      13:51:41 spdk_dd -- scripts/common.sh@355 -- # echo 2
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@368 -- # return 0
00:14:58.427     13:51:41 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:58.427     13:51:41 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:58.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.427  		--rc genhtml_branch_coverage=1
00:14:58.427  		--rc genhtml_function_coverage=1
00:14:58.427  		--rc genhtml_legend=1
00:14:58.427  		--rc geninfo_all_blocks=1
00:14:58.427  		--rc geninfo_unexecuted_blocks=1
00:14:58.427  		
00:14:58.427  		'
00:14:58.427     13:51:41 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:58.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.427  		--rc genhtml_branch_coverage=1
00:14:58.427  		--rc genhtml_function_coverage=1
00:14:58.427  		--rc genhtml_legend=1
00:14:58.427  		--rc geninfo_all_blocks=1
00:14:58.427  		--rc geninfo_unexecuted_blocks=1
00:14:58.427  		
00:14:58.427  		'
00:14:58.427     13:51:41 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:58.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.427  		--rc genhtml_branch_coverage=1
00:14:58.427  		--rc genhtml_function_coverage=1
00:14:58.427  		--rc genhtml_legend=1
00:14:58.427  		--rc geninfo_all_blocks=1
00:14:58.427  		--rc geninfo_unexecuted_blocks=1
00:14:58.427  		
00:14:58.427  		'
00:14:58.427     13:51:41 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:58.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.427  		--rc genhtml_branch_coverage=1
00:14:58.427  		--rc genhtml_function_coverage=1
00:14:58.427  		--rc genhtml_legend=1
00:14:58.427  		--rc geninfo_all_blocks=1
00:14:58.427  		--rc geninfo_unexecuted_blocks=1
00:14:58.427  		
00:14:58.427  		'
00:14:58.427    13:51:41 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:58.427     13:51:41 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:58.427      13:51:41 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:58.427      13:51:41 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:58.427      13:51:41 spdk_dd -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:58.427      13:51:41 spdk_dd -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:58.427      13:51:41 spdk_dd -- paths/export.sh@6 -- # export PATH
00:14:58.427      13:51:41 spdk_dd -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:58.427   13:51:41 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:58.994  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:58.994  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:59.561   13:51:42 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace))
00:14:59.561    13:51:42 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace
00:14:59.561    13:51:42 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs
00:14:59.561    13:51:42 spdk_dd -- scripts/common.sh@313 -- # local nvmes
00:14:59.561    13:51:42 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]]
00:14:59.561    13:51:42 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:14:59.561     13:51:42 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:14:59.561     13:51:42 spdk_dd -- scripts/common.sh@298 -- # local bdf=
00:14:59.561      13:51:42 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:14:59.561      13:51:42 spdk_dd -- scripts/common.sh@233 -- # local class
00:14:59.561      13:51:42 spdk_dd -- scripts/common.sh@234 -- # local subclass
00:14:59.561      13:51:42 spdk_dd -- scripts/common.sh@235 -- # local progif
00:14:59.561       13:51:42 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1
00:14:59.561      13:51:42 spdk_dd -- scripts/common.sh@236 -- # class=01
00:14:59.821       13:51:42 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@237 -- # subclass=08
00:14:59.821       13:51:42 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@238 -- # progif=02
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@240 -- # hash lspci
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02
00:14:59.821      13:51:42 spdk_dd -- scripts/common.sh@245 -- # tr -d '"'
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@18 -- # local i
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]]
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@27 -- # return 0
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:14:59.821    13:51:42 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:14:59.821    13:51:42 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:14:59.821     13:51:42 spdk_dd -- scripts/common.sh@323 -- # uname -s
00:14:59.821    13:51:42 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:14:59.821    13:51:42 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:14:59.821    13:51:42 spdk_dd -- scripts/common.sh@328 -- # (( 1 ))
00:14:59.821    13:51:42 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0
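
Steps @233-@329 above enumerate NVMe controllers by PCI class 01 (mass storage), subclass 08 (NVM), progif 02 (NVM Express), then filter each BDF through pci_can_use. The lspci pipeline at the core of that walk, as a standalone sketch:

    # Print the BDF of every PCI function whose class code is 0108
    # and whose programming interface is 02 (NVM Express).
    iter_nvme_bdfs() {
        lspci -mm -n -D \
            | grep -i -- -p02 \
            | awk -v cc='"0108"' -F ' ' '{ if (cc ~ $2) print $1 }' \
            | tr -d '"'
    }
    for bdf in $(iter_nvme_bdfs); do
        echo "NVMe controller at $bdf"    # 0000:00:10.0 in this run
    done
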
00:14:59.821   13:51:42 spdk_dd -- dd/dd.sh@13 -- # check_liburing
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@139 -- # local lib
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:14:59.821    13:51:42 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:59.821    13:51:42 spdk_dd -- dd/common.sh@137 -- # grep NEEDED
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]]
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]]
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]]
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]]
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]]
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n'
00:14:59.821  * spdk_dd linked to liburing
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:14:59.821   13:51:42 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:14:59.821    13:51:42 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:14:59.822    13:51:42 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=n
00:14:59.822   13:51:42 spdk_dd -- dd/common.sh@149 -- # [[ n != y ]]
00:14:59.822   13:51:42 spdk_dd -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n'
00:14:59.822  * spdk_dd built with liburing, but no liburing support requested?
00:14:59.822   13:51:42 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1
00:14:59.822   13:51:42 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1
00:14:59.822   13:51:42 spdk_dd -- dd/common.sh@153 -- # return 0
00:14:59.822   13:51:42 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))
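
check_liburing (dd/common.sh@139-@153) above decides whether the spdk_dd binary was linked against liburing by scanning its ELF dynamic section for a NEEDED entry. The same probe as a standalone sketch:

    liburing_in_use=0
    while read -r _ lib _; do
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1    # e.g. liburing.so.2, as in the trace above
            break
        fi
    done < <(objdump -p build/bin/spdk_dd | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"
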
00:14:59.822   13:51:42 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0
00:14:59.822   13:51:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:59.822   13:51:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:59.822   13:51:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:14:59.822  ************************************
00:14:59.822  START TEST spdk_dd_basic_rw
00:14:59.822  ************************************
00:14:59.822   13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0
00:14:59.822  * Looking for test storage...
00:14:59.822  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:14:59.822     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:59.822      13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version
00:14:59.822      13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-:
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-:
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<'
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:00.081  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:00.081  		--rc genhtml_branch_coverage=1
00:15:00.081  		--rc genhtml_function_coverage=1
00:15:00.081  		--rc genhtml_legend=1
00:15:00.081  		--rc geninfo_all_blocks=1
00:15:00.081  		--rc geninfo_unexecuted_blocks=1
00:15:00.081  		
00:15:00.081  		'
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:00.081  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:00.081  		--rc genhtml_branch_coverage=1
00:15:00.081  		--rc genhtml_function_coverage=1
00:15:00.081  		--rc genhtml_legend=1
00:15:00.081  		--rc geninfo_all_blocks=1
00:15:00.081  		--rc geninfo_unexecuted_blocks=1
00:15:00.081  		
00:15:00.081  		'
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:00.081  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:00.081  		--rc genhtml_branch_coverage=1
00:15:00.081  		--rc genhtml_function_coverage=1
00:15:00.081  		--rc genhtml_legend=1
00:15:00.081  		--rc geninfo_all_blocks=1
00:15:00.081  		--rc geninfo_unexecuted_blocks=1
00:15:00.081  		
00:15:00.081  		'
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:00.081  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:00.081  		--rc genhtml_branch_coverage=1
00:15:00.081  		--rc genhtml_function_coverage=1
00:15:00.081  		--rc genhtml_legend=1
00:15:00.081  		--rc geninfo_all_blocks=1
00:15:00.081  		--rc geninfo_unexecuted_blocks=1
00:15:00.081  		
00:15:00.081  		'
00:15:00.081    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:00.081     13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # export PATH
00:15:00.081      13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@")
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie')
00:15:00.081   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0
00:15:00.082   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:15:00.082   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:00.082    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0
00:15:00.082    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id
00:15:00.082    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id
00:15:00.082     13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0'
00:15:00.343    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ <full spdk_nvme_identify report for the QEMU NVMe controller at 0000:00:10.0 [1b36:0010] (Serial 12340, FW 8.0.0, 5GiB namespace, 8 LBA formats; Current LBA Format: LBA Format #04, Data Size: 4096, Metadata Size: 0)> =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:15:00.343    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ <NVMe identify report, reflowed below> =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:15:00.344  =====================================================
00:15:00.344  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:00.344  =====================================================
00:15:00.344  (Full identify report for the QEMU NVMe controller: Serial 12340, Model QEMU NVMe Ctrl, FW 8.0.0, NVMe 1.4, 256 namespaces, 64 I/O submission/completion queues, max data transfer size 524288; namespace 1 is 1310720 LBAs (5GiB), private, NVM command set. Only the namespace LBA-format table takes part in the regex match above:)
00:15:00.344  Number of LBA Formats:                 8
00:15:00.344  Current LBA Format:                    LBA Format #04
00:15:00.344  LBA Format #00: Data Size:   512  Metadata Size:     0
00:15:00.344  LBA Format #01: Data Size:   512  Metadata Size:     8
00:15:00.344  LBA Format #02: Data Size:   512  Metadata Size:    16
00:15:00.344  LBA Format #03: Data Size:   512  Metadata Size:    64
00:15:00.344  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:15:00.344  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:15:00.344  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:15:00.344  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
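The trace above (dd/common.sh@130-134 feeding basic_rw.sh@93) probes the controller's native block size: it locates the current LBA format in the identify report and extracts that format's data size. A minimal bash sketch of the same extraction follows; the function name and the identify invocation are assumptions for illustration, and only the final regex is the one actually traced:

    # Probe the native block size of the NVMe namespace behind $traddr.
    get_native_bs() {
        local traddr=$1 identify lbaf
        identify=$(build/examples/identify -r "trtype:pcie traddr:$traddr")  # assumed path/tool
        # "Current LBA Format: LBA Format #04"  ->  lbaf=04
        [[ $identify =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]] && lbaf=${BASH_REMATCH[1]}
        # "LBA Format #04: Data Size:  4096 ..."  ->  4096 (the regex traced at dd/common.sh@131)
        [[ $identify =~ LBA\ Format\ \#$lbaf:\ Data\ Size:\ *([0-9]+) ]] && echo "${BASH_REMATCH[1]}"
    }
    native_bs=$(get_native_bs 0000:00:10.0)   # -> 4096 on this controller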
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:15:00.344  ************************************
00:15:00.344  START TEST dd_bs_lt_native_bs
00:15:00.344  ************************************
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:00.344    13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:15:00.344   13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:15:00.344  {
00:15:00.344    "subsystems": [
00:15:00.344      {
00:15:00.344        "subsystem": "bdev",
00:15:00.344        "config": [
00:15:00.344          {
00:15:00.344            "params": {
00:15:00.344              "trtype": "pcie",
00:15:00.344              "traddr": "0000:00:10.0",
00:15:00.344              "name": "Nvme0"
00:15:00.344            },
00:15:00.344            "method": "bdev_nvme_attach_controller"
00:15:00.344          },
00:15:00.344          {
00:15:00.344            "method": "bdev_wait_for_examine"
00:15:00.344          }
00:15:00.344        ]
00:15:00.344      }
00:15:00.344    ]
00:15:00.344  }
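Every spdk_dd invocation in this suite receives the bdev configuration above as JSON over an anonymous pipe: the gen_conf helper traced at basic_rw.sh@96 prints it, and bash process substitution exposes it as a /dev/fd path, which is why the command line shows --json /dev/fd/61 (and the generated input data as --if=/dev/fd/62). A sketch of that plumbing, with gen_conf's body reconstructed from the JSON actually logged:

    gen_conf() {
        # Reconstructed from the config printed in the trace above.
        printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" } ] } ] }'
    }
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # The fd number spdk_dd sees depends on how many substitutions are open:
    "$DD" --if="$input" --ob=Nvme0n1 --bs=2048 --json <(gen_conf)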
00:15:00.344  [2024-12-11 13:51:43.082735] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:00.344  [2024-12-11 13:51:43.082927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ]
00:15:00.603  [2024-12-11 13:51:43.286973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:00.862  [2024-12-11 13:51:43.490348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:01.430  [2024-12-11 13:51:43.990832] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:15:01.430  [2024-12-11 13:51:43.990929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:01.998  [2024-12-11 13:51:44.713924] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:02.257  
00:15:02.257  real	0m2.027s
00:15:02.257  user	0m1.583s
00:15:02.257  sys	0m0.372s
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:02.257   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x
00:15:02.257  ************************************
00:15:02.257  END TEST dd_bs_lt_native_bs
00:15:02.257  ************************************
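dd_bs_lt_native_bs passes precisely because spdk_dd refuses the undersized --bs=2048: the NOT wrapper inverts the exit status, and the es arithmetic traced at autotest_common.sh@652-679 (es=234, es>128, es=106, es=1) folds spdk_dd's status into a plain boolean. A behavioral sketch of that wrapper, reconstructed from the trace rather than copied from autotest_common.sh:

    # Succeed only if the wrapped command fails; normalize its exit code first.
    NOT() {
        local es=0
        "$@" || es=$?                          # spdk_dd exited 234 in the run above
        (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106 (strip the signal offset)
        (( es != 0 )) && es=1                  # any remaining failure collapses to 1
        (( !es == 0 ))                         # invert: failure of "$@" is success here
    }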
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:15:02.520  ************************************
00:15:02.520  START TEST dd_rw
00:15:02.520  ************************************
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64)
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
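The three loop iterations above build the block-size ladder for dd_rw by left-shifting the probed native size; combined with qds=(1 64), that schedules six write/read/verify passes. Spelled out with the values from the trace:

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($(( native_bs << bs )))   # bss ends up as (4096 8192 16384)
    done
    # Each (bs, qd) pair below gets one write pass, one read-back pass, and a diff.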
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:15:02.520   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
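count and size for this first pass follow directly: 15 blocks of the current 4096-byte block size. gen_bytes (dd/common.sh@98) produces that payload, which evidently lands in dd.dump0, the input file of the write pass below; the later 8 KiB and 16 KiB passes use the same arithmetic with count=7 (size=57344) and count=3 (size=49152).

    count=15
    size=$(( count * bs ))   # 15 * 4096 = 61440 bytes to write and verify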
00:15:03.118   13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:15:03.118    13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:15:03.118    13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:03.118    13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:03.118  {
00:15:03.118    "subsystems": [
00:15:03.118      {
00:15:03.118        "subsystem": "bdev",
00:15:03.118        "config": [
00:15:03.118          {
00:15:03.118            "params": {
00:15:03.118              "trtype": "pcie",
00:15:03.118              "traddr": "0000:00:10.0",
00:15:03.118              "name": "Nvme0"
00:15:03.118            },
00:15:03.118            "method": "bdev_nvme_attach_controller"
00:15:03.118          },
00:15:03.118          {
00:15:03.118            "method": "bdev_wait_for_examine"
00:15:03.118          }
00:15:03.118        ]
00:15:03.118      }
00:15:03.118    ]
00:15:03.118  }
00:15:03.118  [2024-12-11 13:51:45.754484] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:03.118  [2024-12-11 13:51:45.754701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ]
00:15:03.378  [2024-12-11 13:51:45.947915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:03.378  [2024-12-11 13:51:46.099881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:03.946  
[2024-12-11T13:51:48.096Z] Copying: 60/60 [kB] (average 14 MBps)
00:15:05.324  
00:15:05.324   13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
00:15:05.324    13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:15:05.324    13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:05.324    13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:05.324  {
00:15:05.324    "subsystems": [
00:15:05.324      {
00:15:05.324        "subsystem": "bdev",
00:15:05.324        "config": [
00:15:05.324          {
00:15:05.324            "params": {
00:15:05.324              "trtype": "pcie",
00:15:05.324              "traddr": "0000:00:10.0",
00:15:05.324              "name": "Nvme0"
00:15:05.324            },
00:15:05.324            "method": "bdev_nvme_attach_controller"
00:15:05.324          },
00:15:05.324          {
00:15:05.324            "method": "bdev_wait_for_examine"
00:15:05.324          }
00:15:05.324        ]
00:15:05.324      }
00:15:05.324    ]
00:15:05.324  }
00:15:05.324  [2024-12-11 13:51:48.029310] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:05.324  [2024-12-11 13:51:48.029526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76452 ]
00:15:05.583  [2024-12-11 13:51:48.220781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:05.842  [2024-12-11 13:51:48.372986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:06.102  
[2024-12-11T13:51:50.253Z] Copying: 60/60 [kB] (average 19 MBps)
00:15:07.481  
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:15:07.481   13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:07.481    13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:15:07.481    13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:07.481    13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:07.481  {
00:15:07.481    "subsystems": [
00:15:07.481      {
00:15:07.481        "subsystem": "bdev",
00:15:07.481        "config": [
00:15:07.481          {
00:15:07.481            "params": {
00:15:07.481              "trtype": "pcie",
00:15:07.481              "traddr": "0000:00:10.0",
00:15:07.481              "name": "Nvme0"
00:15:07.481            },
00:15:07.481            "method": "bdev_nvme_attach_controller"
00:15:07.481          },
00:15:07.481          {
00:15:07.481            "method": "bdev_wait_for_examine"
00:15:07.481          }
00:15:07.481        ]
00:15:07.481      }
00:15:07.481    ]
00:15:07.481  }
00:15:07.481  [2024-12-11 13:51:49.993847] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:07.481  [2024-12-11 13:51:49.994013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76483 ]
00:15:07.481  [2024-12-11 13:51:50.176183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:07.740  [2024-12-11 13:51:50.332291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:08.308  
[2024-12-11T13:51:52.510Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:15:09.738  
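That completes one full cycle for bs=4096, qd=1, and every (bs, qd) pair below repeats the same shape. Condensed into the four traced commands, with paths exactly as logged and gen_conf as sketched earlier:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)              # write 61440 B
    "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)   # read them back
    diff -q "$DUMP0" "$DUMP1"                                                         # verify the round trip
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)       # clear_nvme: 1 MiB of zeros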
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:15:09.738   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:09.997   13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62
00:15:09.997    13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:15:09.997    13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:09.997    13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:09.997  {
00:15:09.997    "subsystems": [
00:15:09.997      {
00:15:09.997        "subsystem": "bdev",
00:15:09.997        "config": [
00:15:09.997          {
00:15:09.997            "params": {
00:15:09.997              "trtype": "pcie",
00:15:09.997              "traddr": "0000:00:10.0",
00:15:09.997              "name": "Nvme0"
00:15:09.997            },
00:15:09.997            "method": "bdev_nvme_attach_controller"
00:15:09.997          },
00:15:09.997          {
00:15:09.997            "method": "bdev_wait_for_examine"
00:15:09.997          }
00:15:09.997        ]
00:15:09.997      }
00:15:09.997    ]
00:15:09.997  }
00:15:09.997  [2024-12-11 13:51:52.711917] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:09.997  [2024-12-11 13:51:52.712063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76519 ]
00:15:10.256  [2024-12-11 13:51:52.895317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:10.515  [2024-12-11 13:51:53.087612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:11.083  
[2024-12-11T13:51:54.790Z] Copying: 60/60 [kB] (average 58 MBps)
00:15:12.018  
00:15:12.018   13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62
00:15:12.018    13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:15:12.018    13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:12.018    13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:12.018  {
00:15:12.018    "subsystems": [
00:15:12.018      {
00:15:12.018        "subsystem": "bdev",
00:15:12.018        "config": [
00:15:12.018          {
00:15:12.018            "params": {
00:15:12.018              "trtype": "pcie",
00:15:12.018              "traddr": "0000:00:10.0",
00:15:12.018              "name": "Nvme0"
00:15:12.018            },
00:15:12.018            "method": "bdev_nvme_attach_controller"
00:15:12.018          },
00:15:12.018          {
00:15:12.018            "method": "bdev_wait_for_examine"
00:15:12.018          }
00:15:12.018        ]
00:15:12.018      }
00:15:12.018    ]
00:15:12.018  }
00:15:12.018  [2024-12-11 13:51:54.679573] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:12.018  [2024-12-11 13:51:54.679797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76549 ]
00:15:12.277  [2024-12-11 13:51:54.877533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:12.277  [2024-12-11 13:51:55.033092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:12.843  
[2024-12-11T13:51:56.992Z] Copying: 60/60 [kB] (average 58 MBps)
00:15:14.220  
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:15:14.220   13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:14.220    13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:15:14.220    13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:14.220    13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:14.220  {
00:15:14.220    "subsystems": [
00:15:14.220      {
00:15:14.220        "subsystem": "bdev",
00:15:14.220        "config": [
00:15:14.220          {
00:15:14.220            "params": {
00:15:14.220              "trtype": "pcie",
00:15:14.220              "traddr": "0000:00:10.0",
00:15:14.220              "name": "Nvme0"
00:15:14.220            },
00:15:14.220            "method": "bdev_nvme_attach_controller"
00:15:14.220          },
00:15:14.220          {
00:15:14.220            "method": "bdev_wait_for_examine"
00:15:14.220          }
00:15:14.220        ]
00:15:14.220      }
00:15:14.220    ]
00:15:14.220  }
00:15:14.220  [2024-12-11 13:51:56.918577] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:14.220  [2024-12-11 13:51:56.919002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76580 ]
00:15:14.479  [2024-12-11 13:51:57.147661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:14.737  [2024-12-11 13:51:57.340889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:15.305  
[2024-12-11T13:51:59.047Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:15:16.275  
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:15:16.275   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:16.844   13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62
00:15:16.844    13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:15:16.844    13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:16.844    13:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:16.844  {
00:15:16.844    "subsystems": [
00:15:16.844      {
00:15:16.844        "subsystem": "bdev",
00:15:16.844        "config": [
00:15:16.844          {
00:15:16.844            "params": {
00:15:16.844              "trtype": "pcie",
00:15:16.844              "traddr": "0000:00:10.0",
00:15:16.844              "name": "Nvme0"
00:15:16.844            },
00:15:16.844            "method": "bdev_nvme_attach_controller"
00:15:16.844          },
00:15:16.844          {
00:15:16.844            "method": "bdev_wait_for_examine"
00:15:16.844          }
00:15:16.844        ]
00:15:16.844      }
00:15:16.844    ]
00:15:16.844  }
00:15:17.103  [2024-12-11 13:51:59.654340] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:17.103  [2024-12-11 13:51:59.654545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76616 ]
00:15:17.103  [2024-12-11 13:51:59.852539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:17.362  [2024-12-11 13:52:00.025643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:17.930  
[2024-12-11T13:52:02.079Z] Copying: 56/56 [kB] (average 54 MBps)
00:15:19.307  
00:15:19.307   13:52:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62
00:15:19.307    13:52:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:15:19.307    13:52:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:19.307    13:52:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:19.307  {
00:15:19.307    "subsystems": [
00:15:19.307      {
00:15:19.307        "subsystem": "bdev",
00:15:19.307        "config": [
00:15:19.307          {
00:15:19.307            "params": {
00:15:19.307              "trtype": "pcie",
00:15:19.307              "traddr": "0000:00:10.0",
00:15:19.307              "name": "Nvme0"
00:15:19.307            },
00:15:19.307            "method": "bdev_nvme_attach_controller"
00:15:19.307          },
00:15:19.307          {
00:15:19.307            "method": "bdev_wait_for_examine"
00:15:19.307          }
00:15:19.307        ]
00:15:19.307      }
00:15:19.307    ]
00:15:19.307  }
00:15:19.565  [2024-12-11 13:52:02.148022] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:19.565  [2024-12-11 13:52:02.148220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76646 ]
00:15:19.565  [2024-12-11 13:52:02.342116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:19.824  [2024-12-11 13:52:02.519674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:20.392  
[2024-12-11T13:52:04.541Z] Copying: 56/56 [kB] (average 18 MBps)
00:15:21.769  
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:15:21.769   13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:21.769    13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:15:21.769    13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:21.769    13:52:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:21.769  {
00:15:21.769    "subsystems": [
00:15:21.769      {
00:15:21.769        "subsystem": "bdev",
00:15:21.769        "config": [
00:15:21.769          {
00:15:21.769            "params": {
00:15:21.769              "trtype": "pcie",
00:15:21.769              "traddr": "0000:00:10.0",
00:15:21.769              "name": "Nvme0"
00:15:21.769            },
00:15:21.769            "method": "bdev_nvme_attach_controller"
00:15:21.769          },
00:15:21.769          {
00:15:21.769            "method": "bdev_wait_for_examine"
00:15:21.769          }
00:15:21.769        ]
00:15:21.769      }
00:15:21.769    ]
00:15:21.769  }
00:15:21.769  [2024-12-11 13:52:04.362816] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:21.769  [2024-12-11 13:52:04.363356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76677 ]
00:15:22.028  [2024-12-11 13:52:04.565831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:22.028  [2024-12-11 13:52:04.770883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.600  
[2024-12-11T13:52:06.755Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:15:23.983  
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:15:23.983   13:52:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:24.551   13:52:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62
00:15:24.551    13:52:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:15:24.551    13:52:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:24.551    13:52:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:24.551  {
00:15:24.551    "subsystems": [
00:15:24.551      {
00:15:24.551        "subsystem": "bdev",
00:15:24.551        "config": [
00:15:24.551          {
00:15:24.551            "params": {
00:15:24.551              "trtype": "pcie",
00:15:24.551              "traddr": "0000:00:10.0",
00:15:24.551              "name": "Nvme0"
00:15:24.551            },
00:15:24.551            "method": "bdev_nvme_attach_controller"
00:15:24.551          },
00:15:24.551          {
00:15:24.551            "method": "bdev_wait_for_examine"
00:15:24.551          }
00:15:24.551        ]
00:15:24.551      }
00:15:24.551    ]
00:15:24.551  }
00:15:24.551  [2024-12-11 13:52:07.126980] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:24.551  [2024-12-11 13:52:07.127478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76718 ]
00:15:24.551  [2024-12-11 13:52:07.323672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:24.810  [2024-12-11 13:52:07.450703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:25.376  
[2024-12-11T13:52:09.082Z] Copying: 56/56 [kB] (average 54 MBps)
00:15:26.310  
00:15:26.310   13:52:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62
00:15:26.310    13:52:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:15:26.310    13:52:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:26.310    13:52:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:26.310  {
00:15:26.310    "subsystems": [
00:15:26.310      {
00:15:26.310        "subsystem": "bdev",
00:15:26.310        "config": [
00:15:26.310          {
00:15:26.310            "params": {
00:15:26.310              "trtype": "pcie",
00:15:26.310              "traddr": "0000:00:10.0",
00:15:26.310              "name": "Nvme0"
00:15:26.310            },
00:15:26.310            "method": "bdev_nvme_attach_controller"
00:15:26.310          },
00:15:26.310          {
00:15:26.310            "method": "bdev_wait_for_examine"
00:15:26.310          }
00:15:26.310        ]
00:15:26.310      }
00:15:26.310    ]
00:15:26.310  }
00:15:26.310  [2024-12-11 13:52:08.912091] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:26.310  [2024-12-11 13:52:08.912285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76743 ]
00:15:26.568  [2024-12-11 13:52:09.108117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:26.568  [2024-12-11 13:52:09.239729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:27.135  
[2024-12-11T13:52:10.843Z] Copying: 56/56 [kB] (average 54 MBps)
00:15:28.071  
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:15:28.356   13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:28.356    13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:15:28.356    13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:28.356    13:52:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:28.356  {
00:15:28.356    "subsystems": [
00:15:28.356      {
00:15:28.356        "subsystem": "bdev",
00:15:28.356        "config": [
00:15:28.356          {
00:15:28.356            "params": {
00:15:28.356              "trtype": "pcie",
00:15:28.356              "traddr": "0000:00:10.0",
00:15:28.356              "name": "Nvme0"
00:15:28.356            },
00:15:28.356            "method": "bdev_nvme_attach_controller"
00:15:28.356          },
00:15:28.356          {
00:15:28.356            "method": "bdev_wait_for_examine"
00:15:28.356          }
00:15:28.356        ]
00:15:28.356      }
00:15:28.356    ]
00:15:28.356  }
00:15:28.356  [2024-12-11 13:52:10.945735] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:28.357  [2024-12-11 13:52:10.946245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76774 ]
00:15:28.622  [2024-12-11 13:52:11.141577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:28.622  [2024-12-11 13:52:11.269521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:29.191  
[2024-12-11T13:52:12.899Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:15:30.127  
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:15:30.127   13:52:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:30.386   13:52:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62
00:15:30.386    13:52:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:15:30.386    13:52:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:30.386    13:52:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:30.386  {
00:15:30.386    "subsystems": [
00:15:30.386      {
00:15:30.386        "subsystem": "bdev",
00:15:30.386        "config": [
00:15:30.386          {
00:15:30.386            "params": {
00:15:30.386              "trtype": "pcie",
00:15:30.386              "traddr": "0000:00:10.0",
00:15:30.386              "name": "Nvme0"
00:15:30.386            },
00:15:30.386            "method": "bdev_nvme_attach_controller"
00:15:30.386          },
00:15:30.386          {
00:15:30.386            "method": "bdev_wait_for_examine"
00:15:30.386          }
00:15:30.386        ]
00:15:30.386      }
00:15:30.386    ]
00:15:30.386  }
00:15:30.644  [2024-12-11 13:52:13.229837] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:30.644  [2024-12-11 13:52:13.230036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76804 ]
00:15:30.644  [2024-12-11 13:52:13.422267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:30.902  [2024-12-11 13:52:13.552570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:31.470  
[2024-12-11T13:52:15.179Z] Copying: 48/48 [kB] (average 46 MBps)
00:15:32.408  
00:15:32.666   13:52:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62
00:15:32.666    13:52:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:15:32.666    13:52:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:32.666    13:52:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:32.666  {
00:15:32.666    "subsystems": [
00:15:32.666      {
00:15:32.666        "subsystem": "bdev",
00:15:32.666        "config": [
00:15:32.666          {
00:15:32.666            "params": {
00:15:32.666              "trtype": "pcie",
00:15:32.666              "traddr": "0000:00:10.0",
00:15:32.666              "name": "Nvme0"
00:15:32.666            },
00:15:32.666            "method": "bdev_nvme_attach_controller"
00:15:32.666          },
00:15:32.666          {
00:15:32.666            "method": "bdev_wait_for_examine"
00:15:32.666          }
00:15:32.666        ]
00:15:32.666      }
00:15:32.666    ]
00:15:32.666  }
00:15:32.666  [2024-12-11 13:52:15.265268] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:32.666  [2024-12-11 13:52:15.265493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76834 ]
00:15:32.666  [2024-12-11 13:52:15.450325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:32.924  [2024-12-11 13:52:15.576808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:33.490  
[2024-12-11T13:52:17.199Z] Copying: 48/48 [kB] (average 46 MBps)
00:15:34.427  
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:15:34.427   13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:34.427    13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:15:34.427    13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:34.427    13:52:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:34.427  {
00:15:34.427    "subsystems": [
00:15:34.427      {
00:15:34.427        "subsystem": "bdev",
00:15:34.427        "config": [
00:15:34.427          {
00:15:34.427            "params": {
00:15:34.427              "trtype": "pcie",
00:15:34.427              "traddr": "0000:00:10.0",
00:15:34.427              "name": "Nvme0"
00:15:34.427            },
00:15:34.427            "method": "bdev_nvme_attach_controller"
00:15:34.427          },
00:15:34.427          {
00:15:34.427            "method": "bdev_wait_for_examine"
00:15:34.427          }
00:15:34.427        ]
00:15:34.427      }
00:15:34.427    ]
00:15:34.427  }
00:15:34.427  [2024-12-11 13:52:17.027863] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:34.427  [2024-12-11 13:52:17.028122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76860 ]
00:15:34.685  [2024-12-11 13:52:17.220716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:34.685  [2024-12-11 13:52:17.351673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:35.252  
[2024-12-11T13:52:18.958Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:15:36.186  
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:15:36.445   13:52:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:36.703   13:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
00:15:36.703    13:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:15:36.703    13:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:36.703    13:52:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:36.703  {
00:15:36.703    "subsystems": [
00:15:36.703      {
00:15:36.703        "subsystem": "bdev",
00:15:36.703        "config": [
00:15:36.703          {
00:15:36.703            "params": {
00:15:36.703              "trtype": "pcie",
00:15:36.703              "traddr": "0000:00:10.0",
00:15:36.703              "name": "Nvme0"
00:15:36.703            },
00:15:36.703            "method": "bdev_nvme_attach_controller"
00:15:36.703          },
00:15:36.703          {
00:15:36.703            "method": "bdev_wait_for_examine"
00:15:36.703          }
00:15:36.703        ]
00:15:36.703      }
00:15:36.703    ]
00:15:36.703  }
00:15:36.963  [2024-12-11 13:52:19.510746] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:36.963  [2024-12-11 13:52:19.510895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76894 ]
00:15:36.963  [2024-12-11 13:52:19.684907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:37.221  [2024-12-11 13:52:19.812666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:37.480  
[2024-12-11T13:52:21.189Z] Copying: 48/48 [kB] (average 46 MBps)
00:15:38.417  
00:15:38.417   13:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
00:15:38.417    13:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:15:38.417    13:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:38.418    13:52:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:38.418  {
00:15:38.418    "subsystems": [
00:15:38.418      {
00:15:38.418        "subsystem": "bdev",
00:15:38.418        "config": [
00:15:38.418          {
00:15:38.418            "params": {
00:15:38.418              "trtype": "pcie",
00:15:38.418              "traddr": "0000:00:10.0",
00:15:38.418              "name": "Nvme0"
00:15:38.418            },
00:15:38.418            "method": "bdev_nvme_attach_controller"
00:15:38.418          },
00:15:38.418          {
00:15:38.418            "method": "bdev_wait_for_examine"
00:15:38.418          }
00:15:38.418        ]
00:15:38.418      }
00:15:38.418    ]
00:15:38.418  }
00:15:38.677  [2024-12-11 13:52:21.244228] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:38.677  [2024-12-11 13:52:21.244381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76920 ]
00:15:38.677  [2024-12-11 13:52:21.414312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:38.936  [2024-12-11 13:52:21.541714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:39.505  
[2024-12-11T13:52:23.216Z] Copying: 48/48 [kB] (average 46 MBps)
00:15:40.444  
00:15:40.444   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:40.444   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:15:40.444   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:40.444   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:40.445   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152
00:15:40.445   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:40.445   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:15:40.445   13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:40.445    13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:15:40.445    13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:40.445    13:52:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:15:40.445  {
00:15:40.445    "subsystems": [
00:15:40.445      {
00:15:40.445        "subsystem": "bdev",
00:15:40.445        "config": [
00:15:40.445          {
00:15:40.445            "params": {
00:15:40.445              "trtype": "pcie",
00:15:40.445              "traddr": "0000:00:10.0",
00:15:40.445              "name": "Nvme0"
00:15:40.445            },
00:15:40.445            "method": "bdev_nvme_attach_controller"
00:15:40.445          },
00:15:40.445          {
00:15:40.445            "method": "bdev_wait_for_examine"
00:15:40.445          }
00:15:40.445        ]
00:15:40.445      }
00:15:40.445    ]
00:15:40.445  }
00:15:40.445  [2024-12-11 13:52:23.207381] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:40.445  [2024-12-11 13:52:23.207550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76951 ]
00:15:40.704  [2024-12-11 13:52:23.378876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:40.963  [2024-12-11 13:52:23.508160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:41.221  
[2024-12-11T13:52:24.947Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:15:42.175  
00:15:42.175  ************************************
00:15:42.175  END TEST dd_rw
00:15:42.175  ************************************
00:15:42.175  
00:15:42.175  real	0m39.794s
00:15:42.176  user	0m31.901s
00:15:42.176  sys	0m6.343s
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
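Every START/END banner pair and real/user/sys block in this log is emitted by the run_test wrapper, which times the named test function with bash's time builtin (the '[' 2 -le 1 ']' lines are its argument-count guard). A hypothetical reduction of that wrapper, shown before it launches dd_rw_offset below:

  # Hypothetical reduction of run_test: banner, time the test body, banner again.
  run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
  }
  run_test_sketch dd_rw_offset basic_offset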
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:15:42.176  ************************************
00:15:42.176  START TEST dd_rw_offset
00:15:42.176  ************************************
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable
00:15:42.176   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:15:42.439   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 ))
00:15:42.439   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # data=vr5fd28dc24p20qhqxt9fby9e2tl0uf8qoz2t7srub06zx69vnv9uiyx9tgwf42oaecenmrlivvg6ehhf1oa9frrpum0a95oopaxv5fli9gsdlmbi7y5d1uyqof1wf6ppnu6w15p6625p4h4vss4nrprca84yzuvp0ik8reszxp0vtqzue3j8s1nl7i4jdlcvsqmsacotngx63hparnf5uduv20xp16ln2c34n3kns2p1rctsuxc0zzkoat9v6bdhz3hgb4nt5hwoqa7od15vl05uk5l1rk5ln7aramytmqrffdd0vsdt81zig1c9orgtoemcc9qn8fo2h1n53yrnbyqfoxiqvbo89rrvmzbceeaoqiwhvji8j3gop082w00ozcquddddevzzvs848l72x9asa615ikyp1soi7nvi1ukyni7q4qubmwrav79mtlbjiqoqvaigqqvuygm2tl83wqsdpth84j05rnnvo7gt5istrfw886j8p3pqwstt2o0h90nsyhobaoslltnaogjapmujftzs18tl86p66kq0gdto3bltbns7zaxoml8peod6ilzdympkjswcpp7ndzmlhb4ep9wkoqxo86offhm8dlj2z2hixhrl10kz3k1ly1u1lj158pg7slpej7kpntgca7nftw7rgzckx32a72vq3zv6sf9tt9048lhyq3lknqvuk1hhtdfid5nnkl4oyk0efng10azs8o7hmplhku8g02om2mmlp21th6h6m02jonj9ll9zq0is422ye1dk25jv1r93y41zj5k65p01msyuyhteagfdd8nhjtdoy92z7gdgm83x6b5ni4aev7v60ppjf3g0alg1tq3yx51crq9o3w0neszw9tiik6ckcvx2qt25y1np22lljihq6770vnc6pev4t4ibt9ijbwltc1feacedq0lym3nivk0kho9ahv9f1vqg1595w9mrms1zhd5p1xb71duin0plkmlwlg6fefjhl3ti407bqa6we62tupeogo0ekdi7qw5wuu077grtkm8kt0pzc40ki5xelyldqi4smubla6wkkaf2cn5267dokle5egjcbwf655owh7lmzhh0ft1u4qk348n541zhuonxfyg0ekt4eoymthoa7jljwr43bh3n3jt7ad0cpm8zn61i9tmmunb8e6m5282w1q6pwupwynup0pft9erix189ihf0lar76t8zy2gk7tkujxvn4m4sy2647g80h8ne8uhbkg39pli9ig1mqwxow42kphby9qbjjvpu7qg39moid1wvaqsl65kil4a8wap8h9z776tuo9ya73wy5zk6g8gna6st81hc8h6uyvzld252ci3q66vqhrm6k87d4rdij61u59hb8c41v0itif7ame0l56ee3dgha89za6237wnlwfj3imt5szh43dmwhq4hsxd5iy0p6wrkee99ap2zuic4peufnwego74rje73w9buu1unwtk6h0l22vtbc55oem52b5o7azeat0g7huz8hjlq63od95d8d7krqdde1n1995ujkd02lo76fj4mtpzczjj2gu9zve0zhk1cfiukh1tei5xvme2f904zdoj481ek785cznz8kjmrzz9vrdt376y0ojy31zs6nsmhwzr0e0yl1ech6sac8dthdz7pi6xgwxwgns9okncwdumiu3u7gv26mc7klroor74i05pvtnojqinxie4fyxc0wyvkn9curysqnwmp5q37mzvrftk5i2chux4fnkwz8bxv1epo5be90z439j1w2lxdi1g50fvhg4aoxgdebtxfbew0hoawnc28lasy8u4wv74bte471measq9py7nz7nn3ah0sa9z6sp1g76n930a8hd1i3yu4kh5vybjr1kfz16eusapvtq9mer8mwlnvbs5yhcmsbn4tflohz6vwex30c0f5hg87lt5r3sx4k5hotz5askg370dlhgvej89943purqs57wgcndiyltguyfnn0kx6q41yqvy8kprxu7xvf4k374u1wq3mqck9ua5ao3l2674rh0s5tjqwae3rojf6urqv415baz202l812a5nrlukfryxu634kdub6ykeh89jbfkliq9h0pio24d10eszonury5o08a01bc0piw6ly3lwo3qtbgy7t3z042qxxvjtqm3tz0pjma5sh95a80pnolw1c5zj7mrdlihw9hc9z6fvvmrsg33glyurbmzw43i0m1rr86q4tmz0tegc8tpfytk97242qi35eaqskn7uqd3vwmrqd3kcu1uepy6j1us9ql331n4hpeqtjr3d6egkv5v9xtzkea24p9umnb7hhpwveallvdri037smdzf8mqyt8ztikd3lbkpuy2c5gtzfi4k8894xjrwir99zh9rh7h6wonx0r9m805l40grpm88183rmzqax2osq4bk970lk5yeftylsdibynbu48dn814t6ohp0ll4hbg5qb28v6thv5xyytouyh5vym3ieyue9no6ymjvsckzqd4dqnr5hu4fny4ajosae4vowa4t0u9q8t66x32oygrbjwi1ndn9c7d04o5dtu7yt28sjx3l4eabrgwuxn5y2fbzqb5aqu6lis8b8h2fy15jip2kltw46ohfsacumi40vscntfounakop3io767b5a4ebpx13s972ajdf7a6b9bm0n21aeeww2xbmddx9xixc1nfomx9xfup4x5akdav4ohc8loue18k3xs4wli89jo1gur4zymq6cdddehtd487xk3pxhnbnpleh78vsp9r95lbkiamwirmh1ts9mqcg8xv8ya2q3yiaqaral95rn3ywwlf2chxswcef6mvgs68o0kk30cqmi5ftk3d34psr7nppxllrxxxcq4adfe7cd1j64e10hcq1eb6gjr2svuxg19dpaphqop24xl5gdvwvhy7e0dti6o86w85ua4x0ewevhwbimsby2w27ityy9xz1akpluc7xtgjebpmxxdchcb9waupqq3gw0p0r8u06o8d2gojfb5t0maemxs4uzw6dbozm2mass0hmjlpeozkcsqln5l1m74ar2w72a4lga404tit30gaolecym56tkkfo2c694nokfn6fe4ayxfcb9tx1t6yecuflnucz5dhymu0kwpw7dzwsjeu9ausse1rvxclw95p3ynkpieo457qqvckkx4e88t94lzony7az0a58uosczn31048nu6iio0wprqcs7jir7u6jp96ob39uu2vepbgrermlpwohtnhkch4q908ehhtkidj0mcbr6orjwzct2tej0n7u75z2u4n3pklixga055digdarelcdvuwd60ho68q7x91n7ip2c504djetxgynxq3ktfk8qbq4cx9blkvdaxuaeipn78dze2u8q30p5aqpd1dqj6b0f7cb3lrh
ceux79zswx3hc0twpr5450dyundpi20wglh3qwn5unc20vdq7vcs6tf517zw1osirndlg5zdihrbnz9nmbzfhr4h7vr6c59nfnlomf3u9regwi4keb92md7h6z6ehg8xdyv4p283v96e8vf9eivxo3zsmtresfbx5sy1sug1gm2qo5z7j643ix2n2vgwnwej3s2vqyrmk7b7ea37fye5p8e1777w33ujsfz8mlwkmnuspra229fdvrpu0qlno0ttfxgk5nltjvf7v769831o6afgps5vsnxnj4baag0ppwyqrwiu4dssiqa0orsdy1chsrnf1j9o0mee8xug4apm0ejbzz1sba9kv2vsnerudi84v9l63dl0mg7u5ppvitczjapf48j1skvs31et9ea8n5m6u8cmy86baoolhjwdtngxqnzm7jst26isl56chmlbhoe7fe7wtqrhhqb5exwzxhd1eb4neezf9wnkt8ayey6yl15lpw9zsvdpujbqc3rb2sa2nu156i6cocv1qtn19hdx3u5hwj201e62m1yveweb3h9gzzsmov0efc48ziyf4kzs36kpjbea5w4tyvogcj0kc10io0gvy1937bpjeue9ltpx7l9pie0s76
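gen_bytes above filled $data with 4096 random lowercase-alphanumeric characters, which dd_rw_offset writes one block into the bdev (--seek=1) and reads back from the same offset (--skip=1). The helper's body is not shown in this log, so the following stand-in is hypothetical and only reproduces the shape of the data:

  # Hypothetical stand-in for gen_bytes (the real helper lives in dd/common.sh):
  # emit exactly $1 random characters drawn from [a-z0-9] on stdout.
  gen_bytes_sketch() {
    tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
  }
  data=$(gen_bytes_sketch 4096)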
00:15:42.439   13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
00:15:42.439    13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf
00:15:42.439    13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable
00:15:42.439    13:52:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:15:42.439  {
00:15:42.439    "subsystems": [
00:15:42.439      {
00:15:42.439        "subsystem": "bdev",
00:15:42.439        "config": [
00:15:42.439          {
00:15:42.439            "params": {
00:15:42.439              "trtype": "pcie",
00:15:42.439              "traddr": "0000:00:10.0",
00:15:42.439              "name": "Nvme0"
00:15:42.439            },
00:15:42.439            "method": "bdev_nvme_attach_controller"
00:15:42.439          },
00:15:42.439          {
00:15:42.439            "method": "bdev_wait_for_examine"
00:15:42.439          }
00:15:42.439        ]
00:15:42.439      }
00:15:42.439    ]
00:15:42.439  }
00:15:42.439  [2024-12-11 13:52:25.067983] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:42.439  [2024-12-11 13:52:25.068177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76987 ]
00:15:42.700  [2024-12-11 13:52:25.271762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:42.700  [2024-12-11 13:52:25.445690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:43.267  
[2024-12-11T13:52:27.419Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:15:44.647  
00:15:44.647   13:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
00:15:44.647    13:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf
00:15:44.647    13:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable
00:15:44.647    13:52:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:15:44.647  {
00:15:44.647    "subsystems": [
00:15:44.647      {
00:15:44.647        "subsystem": "bdev",
00:15:44.647        "config": [
00:15:44.647          {
00:15:44.647            "params": {
00:15:44.647              "trtype": "pcie",
00:15:44.647              "traddr": "0000:00:10.0",
00:15:44.647              "name": "Nvme0"
00:15:44.647            },
00:15:44.647            "method": "bdev_nvme_attach_controller"
00:15:44.647          },
00:15:44.647          {
00:15:44.647            "method": "bdev_wait_for_examine"
00:15:44.647          }
00:15:44.647        ]
00:15:44.647      }
00:15:44.647    ]
00:15:44.647  }
00:15:44.647  [2024-12-11 13:52:27.328823] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:44.647  [2024-12-11 13:52:27.328981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77023 ]
00:15:44.906  [2024-12-11 13:52:27.502396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:44.906  [2024-12-11 13:52:27.678257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:45.474  
[2024-12-11T13:52:29.626Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:15:46.854  
00:15:46.854   13:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ vr5fd28dc24p20qhqxt9fby9e2tl0uf8qoz2t7srub06zx69vnv9uiyx9tgwf42oaecenmrlivvg6ehhf1oa9frrpum0a95oopaxv5fli9gsdlmbi7y5d1uyqof1wf6ppnu6w15p6625p4h4vss4nrprca84yzuvp0ik8reszxp0vtqzue3j8s1nl7i4jdlcvsqmsacotngx63hparnf5uduv20xp16ln2c34n3kns2p1rctsuxc0zzkoat9v6bdhz3hgb4nt5hwoqa7od15vl05uk5l1rk5ln7aramytmqrffdd0vsdt81zig1c9orgtoemcc9qn8fo2h1n53yrnbyqfoxiqvbo89rrvmzbceeaoqiwhvji8j3gop082w00ozcquddddevzzvs848l72x9asa615ikyp1soi7nvi1ukyni7q4qubmwrav79mtlbjiqoqvaigqqvuygm2tl83wqsdpth84j05rnnvo7gt5istrfw886j8p3pqwstt2o0h90nsyhobaoslltnaogjapmujftzs18tl86p66kq0gdto3bltbns7zaxoml8peod6ilzdympkjswcpp7ndzmlhb4ep9wkoqxo86offhm8dlj2z2hixhrl10kz3k1ly1u1lj158pg7slpej7kpntgca7nftw7rgzckx32a72vq3zv6sf9tt9048lhyq3lknqvuk1hhtdfid5nnkl4oyk0efng10azs8o7hmplhku8g02om2mmlp21th6h6m02jonj9ll9zq0is422ye1dk25jv1r93y41zj5k65p01msyuyhteagfdd8nhjtdoy92z7gdgm83x6b5ni4aev7v60ppjf3g0alg1tq3yx51crq9o3w0neszw9tiik6ckcvx2qt25y1np22lljihq6770vnc6pev4t4ibt9ijbwltc1feacedq0lym3nivk0kho9ahv9f1vqg1595w9mrms1zhd5p1xb71duin0plkmlwlg6fefjhl3ti407bqa6we62tupeogo0ekdi7qw5wuu077grtkm8kt0pzc40ki5xelyldqi4smubla6wkkaf2cn5267dokle5egjcbwf655owh7lmzhh0ft1u4qk348n541zhuonxfyg0ekt4eoymthoa7jljwr43bh3n3jt7ad0cpm8zn61i9tmmunb8e6m5282w1q6pwupwynup0pft9erix189ihf0lar76t8zy2gk7tkujxvn4m4sy2647g80h8ne8uhbkg39pli9ig1mqwxow42kphby9qbjjvpu7qg39moid1wvaqsl65kil4a8wap8h9z776tuo9ya73wy5zk6g8gna6st81hc8h6uyvzld252ci3q66vqhrm6k87d4rdij61u59hb8c41v0itif7ame0l56ee3dgha89za6237wnlwfj3imt5szh43dmwhq4hsxd5iy0p6wrkee99ap2zuic4peufnwego74rje73w9buu1unwtk6h0l22vtbc55oem52b5o7azeat0g7huz8hjlq63od95d8d7krqdde1n1995ujkd02lo76fj4mtpzczjj2gu9zve0zhk1cfiukh1tei5xvme2f904zdoj481ek785cznz8kjmrzz9vrdt376y0ojy31zs6nsmhwzr0e0yl1ech6sac8dthdz7pi6xgwxwgns9okncwdumiu3u7gv26mc7klroor74i05pvtnojqinxie4fyxc0wyvkn9curysqnwmp5q37mzvrftk5i2chux4fnkwz8bxv1epo5be90z439j1w2lxdi1g50fvhg4aoxgdebtxfbew0hoawnc28lasy8u4wv74bte471measq9py7nz7nn3ah0sa9z6sp1g76n930a8hd1i3yu4kh5vybjr1kfz16eusapvtq9mer8mwlnvbs5yhcmsbn4tflohz6vwex30c0f5hg87lt5r3sx4k5hotz5askg370dlhgvej89943purqs57wgcndiyltguyfnn0kx6q41yqvy8kprxu7xvf4k374u1wq3mqck9ua5ao3l2674rh0s5tjqwae3rojf6urqv415baz202l812a5nrlukfryxu634kdub6ykeh89jbfkliq9h0pio24d10eszonury5o08a01bc0piw6ly3lwo3qtbgy7t3z042qxxvjtqm3tz0pjma5sh95a80pnolw1c5zj7mrdlihw9hc9z6fvvmrsg33glyurbmzw43i0m1rr86q4tmz0tegc8tpfytk97242qi35eaqskn7uqd3vwmrqd3kcu1uepy6j1us9ql331n4hpeqtjr3d6egkv5v9xtzkea24p9umnb7hhpwveallvdri037smdzf8mqyt8ztikd3lbkpuy2c5gtzfi4k8894xjrwir99zh9rh7h6wonx0r9m805l40grpm88183rmzqax2osq4bk970lk5yeftylsdibynbu48dn814t6ohp0ll4hbg5qb28v6thv5xyytouyh5vym3ieyue9no6ymjvsckzqd4dqnr5hu4fny4ajosae4vowa4t0u9q8t66x32oygrbjwi1ndn9c7d04o5dtu7yt28sjx3l4eabrgwuxn5y2fbzqb5aqu6lis8b8h2fy15jip2kltw46ohfsacumi40vscntfounakop3io767b5a4ebpx13s972ajdf7a6b9bm0n21aeeww2xbmddx9xixc1nfomx9xfup4x5akdav4ohc8loue18k3xs4wli89jo1gur4zymq6cdddehtd487xk3pxhnbnpleh78vsp9r95lbkiamwirmh1ts9mqcg8xv8ya2q3yiaqaral95rn3ywwlf2chxswcef6mvgs68o0kk30cqmi5ftk3d34psr7nppxllrxxxcq4adfe7cd1j64e10hcq1eb6gjr2svuxg19dpaphqop24xl5gdvwvhy7e0dti6o86w85ua4x0ewevhwbimsby2w27ityy9xz1akpluc7xtgjebpmxxdchcb9waupqq3gw0p0r8u06o8d2gojfb5t0maemxs4uzw6dbozm2mass0hmjlpeozkcsqln5l1m74ar2w72a4lga404tit30gaolecym56tkkfo2c694nokfn6fe4ayxfcb9tx1t6yecuflnucz5dhymu0kwpw7dzwsjeu9ausse1rvxclw95p3ynkpieo457qqvckkx4e88t94lzony7az0a58uosczn31048nu6iio0wprqcs7jir7u6jp96ob39uu2vepbgrermlpwohtnhkch4q908ehhtkidj0mcbr6orjwzct2tej0n7u75z2u4n3pklixga055digdarelcdvuwd60ho68q7x91n7ip2c504djetxgynxq3ktfk8qbq4cx9blkvdaxuaeipn78dze2u8q30p5aqpd1dqj6b0f7cb3lrhce
ux79zswx3hc0twpr5450dyundpi20wglh3qwn5unc20vdq7vcs6tf517zw1osirndlg5zdihrbnz9nmbzfhr4h7vr6c59nfnlomf3u9regwi4keb92md7h6z6ehg8xdyv4p283v96e8vf9eivxo3zsmtresfbx5sy1sug1gm2qo5z7j643ix2n2vgwnwej3s2vqyrmk7b7ea37fye5p8e1777w33ujsfz8mlwkmnuspra229fdvrpu0qlno0ttfxgk5nltjvf7v769831o6afgps5vsnxnj4baag0ppwyqrwiu4dssiqa0orsdy1chsrnf1j9o0mee8xug4apm0ejbzz1sba9kv2vsnerudi84v9l63dl0mg7u5ppvitczjapf48j1skvs31et9ea8n5m6u8cmy86baoolhjwdtngxqnzm7jst26isl56chmlbhoe7fe7wtqrhhqb5exwzxhd1eb4neezf9wnkt8ayey6yl15lpw9zsvdpujbqc3rb2sa2nu156i6cocv1qtn19hdx3u5hwj201e62m1yveweb3h9gzzsmov0efc48ziyf4kzs36kpjbea5w4tyvogcj0kc10io0gvy1937bpjeue9ltpx7l9pie0s76 == \v\r\5\f\d\2\8\d\c\2\4\p\2\0\q\h\q\x\t\9\f\b\y\9\e\2\t\l\0\u\f\8\q\o\z\2\t\7\s\r\u\b\0\6\z\x\6\9\v\n\v\9\u\i\y\x\9\t\g\w\f\4\2\o\a\e\c\e\n\m\r\l\i\v\v\g\6\e\h\h\f\1\o\a\9\f\r\r\p\u\m\0\a\9\5\o\o\p\a\x\v\5\f\l\i\9\g\s\d\l\m\b\i\7\y\5\d\1\u\y\q\o\f\1\w\f\6\p\p\n\u\6\w\1\5\p\6\6\2\5\p\4\h\4\v\s\s\4\n\r\p\r\c\a\8\4\y\z\u\v\p\0\i\k\8\r\e\s\z\x\p\0\v\t\q\z\u\e\3\j\8\s\1\n\l\7\i\4\j\d\l\c\v\s\q\m\s\a\c\o\t\n\g\x\6\3\h\p\a\r\n\f\5\u\d\u\v\2\0\x\p\1\6\l\n\2\c\3\4\n\3\k\n\s\2\p\1\r\c\t\s\u\x\c\0\z\z\k\o\a\t\9\v\6\b\d\h\z\3\h\g\b\4\n\t\5\h\w\o\q\a\7\o\d\1\5\v\l\0\5\u\k\5\l\1\r\k\5\l\n\7\a\r\a\m\y\t\m\q\r\f\f\d\d\0\v\s\d\t\8\1\z\i\g\1\c\9\o\r\g\t\o\e\m\c\c\9\q\n\8\f\o\2\h\1\n\5\3\y\r\n\b\y\q\f\o\x\i\q\v\b\o\8\9\r\r\v\m\z\b\c\e\e\a\o\q\i\w\h\v\j\i\8\j\3\g\o\p\0\8\2\w\0\0\o\z\c\q\u\d\d\d\d\e\v\z\z\v\s\8\4\8\l\7\2\x\9\a\s\a\6\1\5\i\k\y\p\1\s\o\i\7\n\v\i\1\u\k\y\n\i\7\q\4\q\u\b\m\w\r\a\v\7\9\m\t\l\b\j\i\q\o\q\v\a\i\g\q\q\v\u\y\g\m\2\t\l\8\3\w\q\s\d\p\t\h\8\4\j\0\5\r\n\n\v\o\7\g\t\5\i\s\t\r\f\w\8\8\6\j\8\p\3\p\q\w\s\t\t\2\o\0\h\9\0\n\s\y\h\o\b\a\o\s\l\l\t\n\a\o\g\j\a\p\m\u\j\f\t\z\s\1\8\t\l\8\6\p\6\6\k\q\0\g\d\t\o\3\b\l\t\b\n\s\7\z\a\x\o\m\l\8\p\e\o\d\6\i\l\z\d\y\m\p\k\j\s\w\c\p\p\7\n\d\z\m\l\h\b\4\e\p\9\w\k\o\q\x\o\8\6\o\f\f\h\m\8\d\l\j\2\z\2\h\i\x\h\r\l\1\0\k\z\3\k\1\l\y\1\u\1\l\j\1\5\8\p\g\7\s\l\p\e\j\7\k\p\n\t\g\c\a\7\n\f\t\w\7\r\g\z\c\k\x\3\2\a\7\2\v\q\3\z\v\6\s\f\9\t\t\9\0\4\8\l\h\y\q\3\l\k\n\q\v\u\k\1\h\h\t\d\f\i\d\5\n\n\k\l\4\o\y\k\0\e\f\n\g\1\0\a\z\s\8\o\7\h\m\p\l\h\k\u\8\g\0\2\o\m\2\m\m\l\p\2\1\t\h\6\h\6\m\0\2\j\o\n\j\9\l\l\9\z\q\0\i\s\4\2\2\y\e\1\d\k\2\5\j\v\1\r\9\3\y\4\1\z\j\5\k\6\5\p\0\1\m\s\y\u\y\h\t\e\a\g\f\d\d\8\n\h\j\t\d\o\y\9\2\z\7\g\d\g\m\8\3\x\6\b\5\n\i\4\a\e\v\7\v\6\0\p\p\j\f\3\g\0\a\l\g\1\t\q\3\y\x\5\1\c\r\q\9\o\3\w\0\n\e\s\z\w\9\t\i\i\k\6\c\k\c\v\x\2\q\t\2\5\y\1\n\p\2\2\l\l\j\i\h\q\6\7\7\0\v\n\c\6\p\e\v\4\t\4\i\b\t\9\i\j\b\w\l\t\c\1\f\e\a\c\e\d\q\0\l\y\m\3\n\i\v\k\0\k\h\o\9\a\h\v\9\f\1\v\q\g\1\5\9\5\w\9\m\r\m\s\1\z\h\d\5\p\1\x\b\7\1\d\u\i\n\0\p\l\k\m\l\w\l\g\6\f\e\f\j\h\l\3\t\i\4\0\7\b\q\a\6\w\e\6\2\t\u\p\e\o\g\o\0\e\k\d\i\7\q\w\5\w\u\u\0\7\7\g\r\t\k\m\8\k\t\0\p\z\c\4\0\k\i\5\x\e\l\y\l\d\q\i\4\s\m\u\b\l\a\6\w\k\k\a\f\2\c\n\5\2\6\7\d\o\k\l\e\5\e\g\j\c\b\w\f\6\5\5\o\w\h\7\l\m\z\h\h\0\f\t\1\u\4\q\k\3\4\8\n\5\4\1\z\h\u\o\n\x\f\y\g\0\e\k\t\4\e\o\y\m\t\h\o\a\7\j\l\j\w\r\4\3\b\h\3\n\3\j\t\7\a\d\0\c\p\m\8\z\n\6\1\i\9\t\m\m\u\n\b\8\e\6\m\5\2\8\2\w\1\q\6\p\w\u\p\w\y\n\u\p\0\p\f\t\9\e\r\i\x\1\8\9\i\h\f\0\l\a\r\7\6\t\8\z\y\2\g\k\7\t\k\u\j\x\v\n\4\m\4\s\y\2\6\4\7\g\8\0\h\8\n\e\8\u\h\b\k\g\3\9\p\l\i\9\i\g\1\m\q\w\x\o\w\4\2\k\p\h\b\y\9\q\b\j\j\v\p\u\7\q\g\3\9\m\o\i\d\1\w\v\a\q\s\l\6\5\k\i\l\4\a\8\w\a\p\8\h\9\z\7\7\6\t\u\o\9\y\a\7\3\w\y\5\z\k\6\g\8\g\n\a\6\s\t\8\1\h\c\8\h\6\u\y\v\z\l\d\2\5\2\c\i\3\q\6\6\v\q\h\r\m\6\k\8\7\d\4\r\d\i\j\6\1\u\5\9\h\b\8\c\4\1\v\0\i\t\i\f\7\a\m\e\0\l\5\6\e\e\3\d\g\h\a\8\9\z\a\6\2\3\7\w\n\l\w\f\j\3\i\m\t\5\s\z\h\4\3\d\m\w\h\q\4\h\s\x\d\5\i\y\0\p\6\w\
r\k\e\e\9\9\a\p\2\z\u\i\c\4\p\e\u\f\n\w\e\g\o\7\4\r\j\e\7\3\w\9\b\u\u\1\u\n\w\t\k\6\h\0\l\2\2\v\t\b\c\5\5\o\e\m\5\2\b\5\o\7\a\z\e\a\t\0\g\7\h\u\z\8\h\j\l\q\6\3\o\d\9\5\d\8\d\7\k\r\q\d\d\e\1\n\1\9\9\5\u\j\k\d\0\2\l\o\7\6\f\j\4\m\t\p\z\c\z\j\j\2\g\u\9\z\v\e\0\z\h\k\1\c\f\i\u\k\h\1\t\e\i\5\x\v\m\e\2\f\9\0\4\z\d\o\j\4\8\1\e\k\7\8\5\c\z\n\z\8\k\j\m\r\z\z\9\v\r\d\t\3\7\6\y\0\o\j\y\3\1\z\s\6\n\s\m\h\w\z\r\0\e\0\y\l\1\e\c\h\6\s\a\c\8\d\t\h\d\z\7\p\i\6\x\g\w\x\w\g\n\s\9\o\k\n\c\w\d\u\m\i\u\3\u\7\g\v\2\6\m\c\7\k\l\r\o\o\r\7\4\i\0\5\p\v\t\n\o\j\q\i\n\x\i\e\4\f\y\x\c\0\w\y\v\k\n\9\c\u\r\y\s\q\n\w\m\p\5\q\3\7\m\z\v\r\f\t\k\5\i\2\c\h\u\x\4\f\n\k\w\z\8\b\x\v\1\e\p\o\5\b\e\9\0\z\4\3\9\j\1\w\2\l\x\d\i\1\g\5\0\f\v\h\g\4\a\o\x\g\d\e\b\t\x\f\b\e\w\0\h\o\a\w\n\c\2\8\l\a\s\y\8\u\4\w\v\7\4\b\t\e\4\7\1\m\e\a\s\q\9\p\y\7\n\z\7\n\n\3\a\h\0\s\a\9\z\6\s\p\1\g\7\6\n\9\3\0\a\8\h\d\1\i\3\y\u\4\k\h\5\v\y\b\j\r\1\k\f\z\1\6\e\u\s\a\p\v\t\q\9\m\e\r\8\m\w\l\n\v\b\s\5\y\h\c\m\s\b\n\4\t\f\l\o\h\z\6\v\w\e\x\3\0\c\0\f\5\h\g\8\7\l\t\5\r\3\s\x\4\k\5\h\o\t\z\5\a\s\k\g\3\7\0\d\l\h\g\v\e\j\8\9\9\4\3\p\u\r\q\s\5\7\w\g\c\n\d\i\y\l\t\g\u\y\f\n\n\0\k\x\6\q\4\1\y\q\v\y\8\k\p\r\x\u\7\x\v\f\4\k\3\7\4\u\1\w\q\3\m\q\c\k\9\u\a\5\a\o\3\l\2\6\7\4\r\h\0\s\5\t\j\q\w\a\e\3\r\o\j\f\6\u\r\q\v\4\1\5\b\a\z\2\0\2\l\8\1\2\a\5\n\r\l\u\k\f\r\y\x\u\6\3\4\k\d\u\b\6\y\k\e\h\8\9\j\b\f\k\l\i\q\9\h\0\p\i\o\2\4\d\1\0\e\s\z\o\n\u\r\y\5\o\0\8\a\0\1\b\c\0\p\i\w\6\l\y\3\l\w\o\3\q\t\b\g\y\7\t\3\z\0\4\2\q\x\x\v\j\t\q\m\3\t\z\0\p\j\m\a\5\s\h\9\5\a\8\0\p\n\o\l\w\1\c\5\z\j\7\m\r\d\l\i\h\w\9\h\c\9\z\6\f\v\v\m\r\s\g\3\3\g\l\y\u\r\b\m\z\w\4\3\i\0\m\1\r\r\8\6\q\4\t\m\z\0\t\e\g\c\8\t\p\f\y\t\k\9\7\2\4\2\q\i\3\5\e\a\q\s\k\n\7\u\q\d\3\v\w\m\r\q\d\3\k\c\u\1\u\e\p\y\6\j\1\u\s\9\q\l\3\3\1\n\4\h\p\e\q\t\j\r\3\d\6\e\g\k\v\5\v\9\x\t\z\k\e\a\2\4\p\9\u\m\n\b\7\h\h\p\w\v\e\a\l\l\v\d\r\i\0\3\7\s\m\d\z\f\8\m\q\y\t\8\z\t\i\k\d\3\l\b\k\p\u\y\2\c\5\g\t\z\f\i\4\k\8\8\9\4\x\j\r\w\i\r\9\9\z\h\9\r\h\7\h\6\w\o\n\x\0\r\9\m\8\0\5\l\4\0\g\r\p\m\8\8\1\8\3\r\m\z\q\a\x\2\o\s\q\4\b\k\9\7\0\l\k\5\y\e\f\t\y\l\s\d\i\b\y\n\b\u\4\8\d\n\8\1\4\t\6\o\h\p\0\l\l\4\h\b\g\5\q\b\2\8\v\6\t\h\v\5\x\y\y\t\o\u\y\h\5\v\y\m\3\i\e\y\u\e\9\n\o\6\y\m\j\v\s\c\k\z\q\d\4\d\q\n\r\5\h\u\4\f\n\y\4\a\j\o\s\a\e\4\v\o\w\a\4\t\0\u\9\q\8\t\6\6\x\3\2\o\y\g\r\b\j\w\i\1\n\d\n\9\c\7\d\0\4\o\5\d\t\u\7\y\t\2\8\s\j\x\3\l\4\e\a\b\r\g\w\u\x\n\5\y\2\f\b\z\q\b\5\a\q\u\6\l\i\s\8\b\8\h\2\f\y\1\5\j\i\p\2\k\l\t\w\4\6\o\h\f\s\a\c\u\m\i\4\0\v\s\c\n\t\f\o\u\n\a\k\o\p\3\i\o\7\6\7\b\5\a\4\e\b\p\x\1\3\s\9\7\2\a\j\d\f\7\a\6\b\9\b\m\0\n\2\1\a\e\e\w\w\2\x\b\m\d\d\x\9\x\i\x\c\1\n\f\o\m\x\9\x\f\u\p\4\x\5\a\k\d\a\v\4\o\h\c\8\l\o\u\e\1\8\k\3\x\s\4\w\l\i\8\9\j\o\1\g\u\r\4\z\y\m\q\6\c\d\d\d\e\h\t\d\4\8\7\x\k\3\p\x\h\n\b\n\p\l\e\h\7\8\v\s\p\9\r\9\5\l\b\k\i\a\m\w\i\r\m\h\1\t\s\9\m\q\c\g\8\x\v\8\y\a\2\q\3\y\i\a\q\a\r\a\l\9\5\r\n\3\y\w\w\l\f\2\c\h\x\s\w\c\e\f\6\m\v\g\s\6\8\o\0\k\k\3\0\c\q\m\i\5\f\t\k\3\d\3\4\p\s\r\7\n\p\p\x\l\l\r\x\x\x\c\q\4\a\d\f\e\7\c\d\1\j\6\4\e\1\0\h\c\q\1\e\b\6\g\j\r\2\s\v\u\x\g\1\9\d\p\a\p\h\q\o\p\2\4\x\l\5\g\d\v\w\v\h\y\7\e\0\d\t\i\6\o\8\6\w\8\5\u\a\4\x\0\e\w\e\v\h\w\b\i\m\s\b\y\2\w\2\7\i\t\y\y\9\x\z\1\a\k\p\l\u\c\7\x\t\g\j\e\b\p\m\x\x\d\c\h\c\b\9\w\a\u\p\q\q\3\g\w\0\p\0\r\8\u\0\6\o\8\d\2\g\o\j\f\b\5\t\0\m\a\e\m\x\s\4\u\z\w\6\d\b\o\z\m\2\m\a\s\s\0\h\m\j\l\p\e\o\z\k\c\s\q\l\n\5\l\1\m\7\4\a\r\2\w\7\2\a\4\l\g\a\4\0\4\t\i\t\3\0\g\a\o\l\e\c\y\m\5\6\t\k\k\f\o\2\c\6\9\4\n\o\k\f\n\6\f\e\4\a\y\x\f\c\b\9\t\x\1\t\6\y\e\c\u\f\l\n\u\c\z\5\d\h\y\m\u\0\k\w\p\w\7\d\z\w\s\j\e\u\9\a\u\s\s\e\1\r\v\x\c\l\w\9\5\p\3\y\n\k\p\i\e\o\4\5\7\q\q\v\c\k\k\x\4\e\8\8\t\9\4\l\z\o
\n\y\7\a\z\0\a\5\8\u\o\s\c\z\n\3\1\0\4\8\n\u\6\i\i\o\0\w\p\r\q\c\s\7\j\i\r\7\u\6\j\p\9\6\o\b\3\9\u\u\2\v\e\p\b\g\r\e\r\m\l\p\w\o\h\t\n\h\k\c\h\4\q\9\0\8\e\h\h\t\k\i\d\j\0\m\c\b\r\6\o\r\j\w\z\c\t\2\t\e\j\0\n\7\u\7\5\z\2\u\4\n\3\p\k\l\i\x\g\a\0\5\5\d\i\g\d\a\r\e\l\c\d\v\u\w\d\6\0\h\o\6\8\q\7\x\9\1\n\7\i\p\2\c\5\0\4\d\j\e\t\x\g\y\n\x\q\3\k\t\f\k\8\q\b\q\4\c\x\9\b\l\k\v\d\a\x\u\a\e\i\p\n\7\8\d\z\e\2\u\8\q\3\0\p\5\a\q\p\d\1\d\q\j\6\b\0\f\7\c\b\3\l\r\h\c\e\u\x\7\9\z\s\w\x\3\h\c\0\t\w\p\r\5\4\5\0\d\y\u\n\d\p\i\2\0\w\g\l\h\3\q\w\n\5\u\n\c\2\0\v\d\q\7\v\c\s\6\t\f\5\1\7\z\w\1\o\s\i\r\n\d\l\g\5\z\d\i\h\r\b\n\z\9\n\m\b\z\f\h\r\4\h\7\v\r\6\c\5\9\n\f\n\l\o\m\f\3\u\9\r\e\g\w\i\4\k\e\b\9\2\m\d\7\h\6\z\6\e\h\g\8\x\d\y\v\4\p\2\8\3\v\9\6\e\8\v\f\9\e\i\v\x\o\3\z\s\m\t\r\e\s\f\b\x\5\s\y\1\s\u\g\1\g\m\2\q\o\5\z\7\j\6\4\3\i\x\2\n\2\v\g\w\n\w\e\j\3\s\2\v\q\y\r\m\k\7\b\7\e\a\3\7\f\y\e\5\p\8\e\1\7\7\7\w\3\3\u\j\s\f\z\8\m\l\w\k\m\n\u\s\p\r\a\2\2\9\f\d\v\r\p\u\0\q\l\n\o\0\t\t\f\x\g\k\5\n\l\t\j\v\f\7\v\7\6\9\8\3\1\o\6\a\f\g\p\s\5\v\s\n\x\n\j\4\b\a\a\g\0\p\p\w\y\q\r\w\i\u\4\d\s\s\i\q\a\0\o\r\s\d\y\1\c\h\s\r\n\f\1\j\9\o\0\m\e\e\8\x\u\g\4\a\p\m\0\e\j\b\z\z\1\s\b\a\9\k\v\2\v\s\n\e\r\u\d\i\8\4\v\9\l\6\3\d\l\0\m\g\7\u\5\p\p\v\i\t\c\z\j\a\p\f\4\8\j\1\s\k\v\s\3\1\e\t\9\e\a\8\n\5\m\6\u\8\c\m\y\8\6\b\a\o\o\l\h\j\w\d\t\n\g\x\q\n\z\m\7\j\s\t\2\6\i\s\l\5\6\c\h\m\l\b\h\o\e\7\f\e\7\w\t\q\r\h\h\q\b\5\e\x\w\z\x\h\d\1\e\b\4\n\e\e\z\f\9\w\n\k\t\8\a\y\e\y\6\y\l\1\5\l\p\w\9\z\s\v\d\p\u\j\b\q\c\3\r\b\2\s\a\2\n\u\1\5\6\i\6\c\o\c\v\1\q\t\n\1\9\h\d\x\3\u\5\h\w\j\2\0\1\e\6\2\m\1\y\v\e\w\e\b\3\h\9\g\z\z\s\m\o\v\0\e\f\c\4\8\z\i\y\f\4\k\z\s\3\6\k\p\j\b\e\a\5\w\4\t\y\v\o\g\c\j\0\k\c\1\0\i\o\0\g\v\y\1\9\3\7\b\p\j\e\u\e\9\l\t\p\x\7\l\9\p\i\e\0\s\7\6 ]]
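The wall of backslashes above is just xtrace quoting of the right-hand side of a [[ == ]] test: the harness reads exactly 4096 bytes back with read -rn4096 and compares them to the original string. Condensed, the round trip looks like this (file names shortened; $conf and $data as in the earlier sketches):

  # Round trip: write 4096 bytes at block offset 1, read the same offset back,
  # then compare byte-for-byte; any mismatch fails the test.
  printf '%s' "$data" > dd.dump0
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$conf")
  read -rn4096 data_check < dd.dump1
  [[ "$data" == "$data_check" ]] && echo "offset round trip OK"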
00:15:46.855  
00:15:46.855  real	0m4.442s
00:15:46.855  user	0m3.557s
00:15:46.855  sys	0m0.712s
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:15:46.855  ************************************
00:15:46.855  END TEST dd_rw_offset
00:15:46.855  ************************************
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref=
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1
00:15:46.855   13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:15:46.855    13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf
00:15:46.855    13:52:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:15:46.855    13:52:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:15:46.855  {
00:15:46.855    "subsystems": [
00:15:46.855      {
00:15:46.855        "subsystem": "bdev",
00:15:46.855        "config": [
00:15:46.855          {
00:15:46.855            "params": {
00:15:46.855              "trtype": "pcie",
00:15:46.855              "traddr": "0000:00:10.0",
00:15:46.855              "name": "Nvme0"
00:15:46.855            },
00:15:46.855            "method": "bdev_nvme_attach_controller"
00:15:46.855          },
00:15:46.855          {
00:15:46.855            "method": "bdev_wait_for_examine"
00:15:46.855          }
00:15:46.855        ]
00:15:46.855      }
00:15:46.855    ]
00:15:46.855  }
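clear_nvme resets the device between suites: with bs=1048576 and count=1 it streams a single 1 MiB block of zeros from /dev/zero over the start of Nvme0n1, so the next test starts from known content. The equivalent one-liner, reusing $conf from the first sketch:

  # Zero the first 1 MiB of the bdev before the next test suite runs.
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")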
00:15:46.855  [2024-12-11 13:52:29.502093] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:46.855  [2024-12-11 13:52:29.502270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77068 ]
00:15:47.114  [2024-12-11 13:52:29.681371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:47.114  [2024-12-11 13:52:29.840230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:47.682  
[2024-12-11T13:52:31.834Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:15:49.062  
00:15:49.062   13:52:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:49.062  
00:15:49.062  real	0m49.207s
00:15:49.062  user	0m39.059s
00:15:49.062  sys	0m8.203s
00:15:49.062   13:52:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:49.062   13:52:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:15:49.062  ************************************
00:15:49.062  END TEST spdk_dd_basic_rw
00:15:49.062  ************************************
00:15:49.062   13:52:31 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:15:49.062   13:52:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:49.062   13:52:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:49.062   13:52:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:15:49.062  ************************************
00:15:49.062  START TEST spdk_dd_posix
00:15:49.062  ************************************
00:15:49.062   13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:15:49.062  * Looking for test storage...
00:15:49.062  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:15:49.062     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:49.062      13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version
00:15:49.062      13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-:
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-:
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<'
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:49.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:49.323  		--rc genhtml_branch_coverage=1
00:15:49.323  		--rc genhtml_function_coverage=1
00:15:49.323  		--rc genhtml_legend=1
00:15:49.323  		--rc geninfo_all_blocks=1
00:15:49.323  		--rc geninfo_unexecuted_blocks=1
00:15:49.323  		
00:15:49.323  		'
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:49.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:49.323  		--rc genhtml_branch_coverage=1
00:15:49.323  		--rc genhtml_function_coverage=1
00:15:49.323  		--rc genhtml_legend=1
00:15:49.323  		--rc geninfo_all_blocks=1
00:15:49.323  		--rc geninfo_unexecuted_blocks=1
00:15:49.323  		
00:15:49.323  		'
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:49.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:49.323  		--rc genhtml_branch_coverage=1
00:15:49.323  		--rc genhtml_function_coverage=1
00:15:49.323  		--rc genhtml_legend=1
00:15:49.323  		--rc geninfo_all_blocks=1
00:15:49.323  		--rc geninfo_unexecuted_blocks=1
00:15:49.323  		
00:15:49.323  		'
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:49.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:49.323  		--rc genhtml_branch_coverage=1
00:15:49.323  		--rc genhtml_function_coverage=1
00:15:49.323  		--rc genhtml_legend=1
00:15:49.323  		--rc geninfo_all_blocks=1
00:15:49.323  		--rc geninfo_unexecuted_blocks=1
00:15:49.323  		
00:15:49.323  		'
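The block above is the harness probing the installed lcov: it takes the last field of the lcov --version output, splits it on '.', '-' and ':', and compares it field-by-field against 1.15 and 2 to decide which coverage options to export. A self-contained sketch of that comparison (the helper name is mine; numeric version fields are assumed):

  # Sketch of the dotted-version test: succeed iff $1 sorts before $2.
  version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov detected"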
00:15:49.323    13:52:31 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:49.323     13:52:31 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # export PATH
00:15:49.323      13:52:31 spdk_dd.spdk_dd_posix -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use'
00:15:49.323  * First test run, liburing in use
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:49.323   13:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:15:49.323  ************************************
00:15:49.324  START TEST dd_flag_append
00:15:49.324  ************************************
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1
00:15:49.324    13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32
00:15:49.324    13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable
00:15:49.324    13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=c58j4zoo0yh3c2lb3vnv8uvob6us8n3x
00:15:49.324    13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32
00:15:49.324    13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable
00:15:49.324    13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=b34qeherecsj1ssn4sfbnjx72ldwrddq
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s c58j4zoo0yh3c2lb3vnv8uvob6us8n3x
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s b34qeherecsj1ssn4sfbnjx72ldwrddq
00:15:49.324   13:52:31 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:15:49.324  [2024-12-11 13:52:32.002429] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:49.324  [2024-12-11 13:52:32.002623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77151 ]
00:15:49.583  [2024-12-11 13:52:32.197501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:49.583  [2024-12-11 13:52:32.354318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:50.151  
[2024-12-11T13:52:34.302Z] Copying: 32/32 [B] (average 31 kBps)
00:15:51.530  
00:15:51.530  ************************************
00:15:51.530  END TEST dd_flag_append
00:15:51.530  ************************************
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ b34qeherecsj1ssn4sfbnjx72ldwrddqc58j4zoo0yh3c2lb3vnv8uvob6us8n3x == \b\3\4\q\e\h\e\r\e\c\s\j\1\s\s\n\4\s\f\b\n\j\x\7\2\l\d\w\r\d\d\q\c\5\8\j\4\z\o\o\0\y\h\3\c\2\l\b\3\v\n\v\8\u\v\o\b\6\u\s\8\n\3\x ]]
00:15:51.530  
00:15:51.530  real	0m2.126s
00:15:51.530  user	0m1.653s
00:15:51.530  sys	0m0.362s
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
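dd_flag_append seeds dd.dump0 and dd.dump1 with the two 32-byte strings generated above, then copies dump0 onto dump1 with --oflag=append; the [[ ]] check expects dump1's original bytes followed by dump0's, proving the destination was opened with O_APPEND. In short (paths shortened):

  # Append test: dd.dump1 must end up as <dump1-bytes><dump0-bytes>.
  printf '%s' "$dump0" > dd.dump0
  printf '%s' "$dump1" > dd.dump1
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo "append OK"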
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:15:51.530  ************************************
00:15:51.530  START TEST dd_flag_directory
00:15:51.530  ************************************
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:51.530    13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:51.530    13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:51.530   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:51.531   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:51.531   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:15:51.531   13:52:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:15:51.531  [2024-12-11 13:52:34.201097] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:51.531  [2024-12-11 13:52:34.201291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77195 ]
00:15:51.790  [2024-12-11 13:52:34.395727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:51.790  [2024-12-11 13:52:34.560803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:52.358  [2024-12-11 13:52:34.989332] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:15:52.358  [2024-12-11 13:52:34.989419] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:15:52.358  [2024-12-11 13:52:34.989448] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:53.295  [2024-12-11 13:52:35.984567] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:15:53.554   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236
00:15:53.554   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:53.554   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108
00:15:53.554   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in
00:15:53.554   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:53.555    13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:53.555    13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:15:53.555   13:52:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:15:53.814  [2024-12-11 13:52:36.377079] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:53.814  [2024-12-11 13:52:36.377263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77222 ]
00:15:53.814  [2024-12-11 13:52:36.570416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:54.073  [2024-12-11 13:52:36.723623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:54.642  [2024-12-11 13:52:37.149483] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:15:54.642  [2024-12-11 13:52:37.149566] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:15:54.642  [2024-12-11 13:52:37.149611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:55.579  [2024-12-11 13:52:38.136347] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:55.839  
00:15:55.839  real	0m4.310s
00:15:55.839  user	0m3.323s
00:15:55.839  sys	0m0.787s
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x
00:15:55.839  ************************************
00:15:55.839  END TEST dd_flag_directory
00:15:55.839  ************************************
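dd_flag_directory is a negative test: opening a regular file with --iflag=directory or --oflag=directory (O_DIRECTORY) must fail, which is the 'Not a directory' error logged above; the NOT helper then normalizes the raw exit status (236 is above 128, so 128 is subtracted to give 108, which the case statement folds down to a generic 1) and records only pass/fail. The same assertion with a hypothetical helper:

  # Hypothetical stand-in for the NOT helper: succeed only if the command fails.
  expect_failure() { ! "$@"; }
  expect_failure spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0 \
    && echo "O_DIRECTORY on a regular file correctly rejected"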
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:15:55.839  ************************************
00:15:55.839  START TEST dd_flag_nofollow
00:15:55.839  ************************************
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
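dd_flag_nofollow applies the same pattern to O_NOFOLLOW: the two ln -fs lines above point dd.dump0.link and dd.dump1.link at the dump files, and opening either link with --iflag=nofollow / --oflag=nofollow must fail with ELOOP ('Too many levels of symbolic links', as logged below). Reusing expect_failure from the previous sketch:

  # Both directions must refuse to traverse a symlink when nofollow is set.
  expect_failure spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
  expect_failure spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow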
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.839    13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.839    13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:15:55.839   13:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:15:55.839  [2024-12-11 13:52:38.572660] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:55.839  [2024-12-11 13:52:38.572852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77274 ]
00:15:56.099  [2024-12-11 13:52:38.767805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:56.358  [2024-12-11 13:52:38.924374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:56.617  [2024-12-11 13:52:39.342457] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:15:56.617  [2024-12-11 13:52:39.342543] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:15:56.617  [2024-12-11 13:52:39.342573] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:57.554  [2024-12-11 13:52:40.317804] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:58.122    13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:58.122    13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:15:58.122   13:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:15:58.122  [2024-12-11 13:52:40.681940] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:15:58.122  [2024-12-11 13:52:40.682135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77301 ]
00:15:58.122  [2024-12-11 13:52:40.874879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:58.381  [2024-12-11 13:52:41.029915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:58.949  [2024-12-11 13:52:41.462055] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:15:58.949  [2024-12-11 13:52:41.462148] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:15:58.949  [2024-12-11 13:52:41.462183] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:59.887  [2024-12-11 13:52:42.411514] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
00:16:00.146   13:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:00.146  [2024-12-11 13:52:42.792165] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:00.146  [2024-12-11 13:52:42.792360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77326 ]
00:16:00.405  [2024-12-11 13:52:42.987001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:00.405  [2024-12-11 13:52:43.139549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:00.974  
[2024-12-11T13:52:45.126Z] Copying: 512/512 [B] (average 500 kBps)
00:16:02.354  
00:16:02.354   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ czxpknuht1bxcouk2pibu2gqoe2tus7nedqjwrz4zjvdxxsm9o0rrayug43qturx8glv7eommwko1cpku9xuvlux11w2fmj905l89endtxjy27ajgiz5ylkcansqenft87qwz6tv940lwkhygrbw5ehv6hxnserbywx3d6cokcyj6gq3hbo1x126ph8crfv50xjlln7r8932t7ql9gcowpps75ovs0lm61onn0m9zbujbwi3l2doepq5s1qscvbbkh2zhpy4x5yj6ec10pg8urn8xao4g9uan9lm7c3657fz4ui3aeshbtqaureoy9qob5miwkgop4oh80rpdx8m2g3o517suxwjjno6aej5vdyiagggv1h45ph7lhmz6aq0ng0xncqppuyxe3xtfjpwd50l3ojvkg65ch0xyyd04cp9o3jyax7wv32ho1nnmft5z1huvz0lescsjlo67fimrdnlqrgve2txve2n0gpdn4aha5gwzl63v8j78oqsitu0 == \c\z\x\p\k\n\u\h\t\1\b\x\c\o\u\k\2\p\i\b\u\2\g\q\o\e\2\t\u\s\7\n\e\d\q\j\w\r\z\4\z\j\v\d\x\x\s\m\9\o\0\r\r\a\y\u\g\4\3\q\t\u\r\x\8\g\l\v\7\e\o\m\m\w\k\o\1\c\p\k\u\9\x\u\v\l\u\x\1\1\w\2\f\m\j\9\0\5\l\8\9\e\n\d\t\x\j\y\2\7\a\j\g\i\z\5\y\l\k\c\a\n\s\q\e\n\f\t\8\7\q\w\z\6\t\v\9\4\0\l\w\k\h\y\g\r\b\w\5\e\h\v\6\h\x\n\s\e\r\b\y\w\x\3\d\6\c\o\k\c\y\j\6\g\q\3\h\b\o\1\x\1\2\6\p\h\8\c\r\f\v\5\0\x\j\l\l\n\7\r\8\9\3\2\t\7\q\l\9\g\c\o\w\p\p\s\7\5\o\v\s\0\l\m\6\1\o\n\n\0\m\9\z\b\u\j\b\w\i\3\l\2\d\o\e\p\q\5\s\1\q\s\c\v\b\b\k\h\2\z\h\p\y\4\x\5\y\j\6\e\c\1\0\p\g\8\u\r\n\8\x\a\o\4\g\9\u\a\n\9\l\m\7\c\3\6\5\7\f\z\4\u\i\3\a\e\s\h\b\t\q\a\u\r\e\o\y\9\q\o\b\5\m\i\w\k\g\o\p\4\o\h\8\0\r\p\d\x\8\m\2\g\3\o\5\1\7\s\u\x\w\j\j\n\o\6\a\e\j\5\v\d\y\i\a\g\g\g\v\1\h\4\5\p\h\7\l\h\m\z\6\a\q\0\n\g\0\x\n\c\q\p\p\u\y\x\e\3\x\t\f\j\p\w\d\5\0\l\3\o\j\v\k\g\6\5\c\h\0\x\y\y\d\0\4\c\p\9\o\3\j\y\a\x\7\w\v\3\2\h\o\1\n\n\m\f\t\5\z\1\h\u\v\z\0\l\e\s\c\s\j\l\o\6\7\f\i\m\r\d\n\l\q\r\g\v\e\2\t\x\v\e\2\n\0\g\p\d\n\4\a\h\a\5\g\w\z\l\6\3\v\8\j\7\8\o\q\s\i\t\u\0 ]]
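The backslashes on the right-hand side of the comparison at posix.sh@49 are not part of the data: xtrace re-prints a quoted [[ ... == "$rhs" ]] operand with every character escaped so that, re-read, it still compares literally instead of globbing. The same effect in two equivalent spellings:

    data='czxp*knu'                                   # '*' would otherwise glob
    [[ $data == \c\z\x\p\*\k\n\u ]] && echo literal   # escaped form, as xtrace shows it
    [[ $data == "$data" ]] && echo literal            # quoted form, as likely written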
00:16:02.354  
00:16:02.354  real	0m6.358s
00:16:02.354  user	0m4.925s
00:16:02.354  sys	0m1.118s
00:16:02.354  ************************************
00:16:02.354  END TEST dd_flag_nofollow
00:16:02.354  ************************************
00:16:02.354   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:02.354   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:02.355  ************************************
00:16:02.355  START TEST dd_flag_noatime
00:16:02.355  ************************************
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x
00:16:02.355    13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733925163
00:16:02.355    13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733925164
00:16:02.355   13:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1
00:16:03.291   13:52:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:03.291  [2024-12-11 13:52:46.011544] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:03.291  [2024-12-11 13:52:46.011765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77383 ]
00:16:03.551  [2024-12-11 13:52:46.216573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:03.810  [2024-12-11 13:52:46.411737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:04.069  
[2024-12-11T13:52:48.220Z] Copying: 512/512 [B] (average 500 kBps)
00:16:05.448  
00:16:05.448    13:52:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:05.448   13:52:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733925163 ))
00:16:05.448    13:52:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:05.448   13:52:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733925164 ))
00:16:05.448   13:52:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:05.448  [2024-12-11 13:52:48.198585] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:05.448  [2024-12-11 13:52:48.198795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77408 ]
00:16:05.707  [2024-12-11 13:52:48.394858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:05.966  [2024-12-11 13:52:48.546504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:06.225  
[2024-12-11T13:52:50.377Z] Copying: 512/512 [B] (average 500 kBps)
00:16:07.605  
00:16:07.605    13:52:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733925168 ))
00:16:07.605  ************************************
00:16:07.605  END TEST dd_flag_noatime
00:16:07.605  ************************************
00:16:07.605  
00:16:07.605  real	0m5.333s
00:16:07.605  user	0m3.323s
00:16:07.605  sys	0m0.788s
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x
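The noatime assertions above follow a capture/compare pattern (posix.sh@60-@73): stat --printf=%X records the source's access time up front, and the epoch literals in the trace (1733925163, 1733925168) are those captured values inlined by xtrace. A condensed sketch with paths shortened; spdk_dd stands for the full build/bin invocation, and the final check assumes a filesystem that updates atime on read:

    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1
    # a copy reading with --iflag=noatime must leave the source atime alone
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))
    # a plain copy is then expected to advance it
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( atime_if < $(stat --printf=%X dd.dump0) ))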
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:07.605  ************************************
00:16:07.605  START TEST dd_flags_misc
00:16:07.605  ************************************
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:07.605   13:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:16:07.864  [2024-12-11 13:52:50.390205] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:07.864  [2024-12-11 13:52:50.390399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77452 ]
00:16:07.864  [2024-12-11 13:52:50.585585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:08.124  [2024-12-11 13:52:50.738979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:08.384  
[2024-12-11T13:52:52.536Z] Copying: 512/512 [B] (average 500 kBps)
00:16:09.764  
00:16:09.764   13:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l06gd19zglouq4c653ohu2xxhbrvgkocf71aakld2bkxquv96xpt2xus0ll4uhbgs6cerxn6ypoummtu7jgrj31i37oet1sk24tprhxrtm8zbv4uxtigl7lsia2nbjfphlxryez6uu8050zvc6t5y7twl18ypu8xz2065evmjbnxv7i98qu896zongde8dt6sirwkzxb3pn8iddubiyyg9evnahytneb579q26b085nzqjds3ql8f4e6c3wvmjz7pkgwc49nouk81ho16nod2uv868zjg7k255cn038frlq9p9k6xdo8xoxju14f6xr0pi870pd87qoq1pa15iny4erqw9qn6fu1s03u0cyinhaxsswcnven7sf4w490yij2pdtz3h2cbue2m4faui9wr3zr7aim90fqayr388npbvhlsa4r0yhdfvy1r4ywz6cn28weeqmgpil9xz85beyfx1zasy934y5onlv19326fc1qnk58dev9iyopshau9skg == \l\0\6\g\d\1\9\z\g\l\o\u\q\4\c\6\5\3\o\h\u\2\x\x\h\b\r\v\g\k\o\c\f\7\1\a\a\k\l\d\2\b\k\x\q\u\v\9\6\x\p\t\2\x\u\s\0\l\l\4\u\h\b\g\s\6\c\e\r\x\n\6\y\p\o\u\m\m\t\u\7\j\g\r\j\3\1\i\3\7\o\e\t\1\s\k\2\4\t\p\r\h\x\r\t\m\8\z\b\v\4\u\x\t\i\g\l\7\l\s\i\a\2\n\b\j\f\p\h\l\x\r\y\e\z\6\u\u\8\0\5\0\z\v\c\6\t\5\y\7\t\w\l\1\8\y\p\u\8\x\z\2\0\6\5\e\v\m\j\b\n\x\v\7\i\9\8\q\u\8\9\6\z\o\n\g\d\e\8\d\t\6\s\i\r\w\k\z\x\b\3\p\n\8\i\d\d\u\b\i\y\y\g\9\e\v\n\a\h\y\t\n\e\b\5\7\9\q\2\6\b\0\8\5\n\z\q\j\d\s\3\q\l\8\f\4\e\6\c\3\w\v\m\j\z\7\p\k\g\w\c\4\9\n\o\u\k\8\1\h\o\1\6\n\o\d\2\u\v\8\6\8\z\j\g\7\k\2\5\5\c\n\0\3\8\f\r\l\q\9\p\9\k\6\x\d\o\8\x\o\x\j\u\1\4\f\6\x\r\0\p\i\8\7\0\p\d\8\7\q\o\q\1\p\a\1\5\i\n\y\4\e\r\q\w\9\q\n\6\f\u\1\s\0\3\u\0\c\y\i\n\h\a\x\s\s\w\c\n\v\e\n\7\s\f\4\w\4\9\0\y\i\j\2\p\d\t\z\3\h\2\c\b\u\e\2\m\4\f\a\u\i\9\w\r\3\z\r\7\a\i\m\9\0\f\q\a\y\r\3\8\8\n\p\b\v\h\l\s\a\4\r\0\y\h\d\f\v\y\1\r\4\y\w\z\6\c\n\2\8\w\e\e\q\m\g\p\i\l\9\x\z\8\5\b\e\y\f\x\1\z\a\s\y\9\3\4\y\5\o\n\l\v\1\9\3\2\6\f\c\1\q\n\k\5\8\d\e\v\9\i\y\o\p\s\h\a\u\9\s\k\g ]]
00:16:09.764   13:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:09.764   13:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:16:09.764  [2024-12-11 13:52:52.495321] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:09.764  [2024-12-11 13:52:52.495523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77477 ]
00:16:10.023  [2024-12-11 13:52:52.689762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:10.282  [2024-12-11 13:52:52.845517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:10.541  
[2024-12-11T13:52:54.691Z] Copying: 512/512 [B] (average 500 kBps)
00:16:11.919  
00:16:11.919   13:52:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l06gd19zglouq4c653ohu2xxhbrvgkocf71aakld2bkxquv96xpt2xus0ll4uhbgs6cerxn6ypoummtu7jgrj31i37oet1sk24tprhxrtm8zbv4uxtigl7lsia2nbjfphlxryez6uu8050zvc6t5y7twl18ypu8xz2065evmjbnxv7i98qu896zongde8dt6sirwkzxb3pn8iddubiyyg9evnahytneb579q26b085nzqjds3ql8f4e6c3wvmjz7pkgwc49nouk81ho16nod2uv868zjg7k255cn038frlq9p9k6xdo8xoxju14f6xr0pi870pd87qoq1pa15iny4erqw9qn6fu1s03u0cyinhaxsswcnven7sf4w490yij2pdtz3h2cbue2m4faui9wr3zr7aim90fqayr388npbvhlsa4r0yhdfvy1r4ywz6cn28weeqmgpil9xz85beyfx1zasy934y5onlv19326fc1qnk58dev9iyopshau9skg == \l\0\6\g\d\1\9\z\g\l\o\u\q\4\c\6\5\3\o\h\u\2\x\x\h\b\r\v\g\k\o\c\f\7\1\a\a\k\l\d\2\b\k\x\q\u\v\9\6\x\p\t\2\x\u\s\0\l\l\4\u\h\b\g\s\6\c\e\r\x\n\6\y\p\o\u\m\m\t\u\7\j\g\r\j\3\1\i\3\7\o\e\t\1\s\k\2\4\t\p\r\h\x\r\t\m\8\z\b\v\4\u\x\t\i\g\l\7\l\s\i\a\2\n\b\j\f\p\h\l\x\r\y\e\z\6\u\u\8\0\5\0\z\v\c\6\t\5\y\7\t\w\l\1\8\y\p\u\8\x\z\2\0\6\5\e\v\m\j\b\n\x\v\7\i\9\8\q\u\8\9\6\z\o\n\g\d\e\8\d\t\6\s\i\r\w\k\z\x\b\3\p\n\8\i\d\d\u\b\i\y\y\g\9\e\v\n\a\h\y\t\n\e\b\5\7\9\q\2\6\b\0\8\5\n\z\q\j\d\s\3\q\l\8\f\4\e\6\c\3\w\v\m\j\z\7\p\k\g\w\c\4\9\n\o\u\k\8\1\h\o\1\6\n\o\d\2\u\v\8\6\8\z\j\g\7\k\2\5\5\c\n\0\3\8\f\r\l\q\9\p\9\k\6\x\d\o\8\x\o\x\j\u\1\4\f\6\x\r\0\p\i\8\7\0\p\d\8\7\q\o\q\1\p\a\1\5\i\n\y\4\e\r\q\w\9\q\n\6\f\u\1\s\0\3\u\0\c\y\i\n\h\a\x\s\s\w\c\n\v\e\n\7\s\f\4\w\4\9\0\y\i\j\2\p\d\t\z\3\h\2\c\b\u\e\2\m\4\f\a\u\i\9\w\r\3\z\r\7\a\i\m\9\0\f\q\a\y\r\3\8\8\n\p\b\v\h\l\s\a\4\r\0\y\h\d\f\v\y\1\r\4\y\w\z\6\c\n\2\8\w\e\e\q\m\g\p\i\l\9\x\z\8\5\b\e\y\f\x\1\z\a\s\y\9\3\4\y\5\o\n\l\v\1\9\3\2\6\f\c\1\q\n\k\5\8\d\e\v\9\i\y\o\p\s\h\a\u\9\s\k\g ]]
00:16:11.919   13:52:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:11.919   13:52:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:16:11.919  [2024-12-11 13:52:54.657268] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:11.919  [2024-12-11 13:52:54.657469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77502 ]
00:16:12.179  [2024-12-11 13:52:54.851967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.437  [2024-12-11 13:52:55.006844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:12.695  
[2024-12-11T13:52:56.844Z] Copying: 512/512 [B] (average 100 kBps)
00:16:14.072  
00:16:14.072   13:52:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l06gd19zglouq4c653ohu2xxhbrvgkocf71aakld2bkxquv96xpt2xus0ll4uhbgs6cerxn6ypoummtu7jgrj31i37oet1sk24tprhxrtm8zbv4uxtigl7lsia2nbjfphlxryez6uu8050zvc6t5y7twl18ypu8xz2065evmjbnxv7i98qu896zongde8dt6sirwkzxb3pn8iddubiyyg9evnahytneb579q26b085nzqjds3ql8f4e6c3wvmjz7pkgwc49nouk81ho16nod2uv868zjg7k255cn038frlq9p9k6xdo8xoxju14f6xr0pi870pd87qoq1pa15iny4erqw9qn6fu1s03u0cyinhaxsswcnven7sf4w490yij2pdtz3h2cbue2m4faui9wr3zr7aim90fqayr388npbvhlsa4r0yhdfvy1r4ywz6cn28weeqmgpil9xz85beyfx1zasy934y5onlv19326fc1qnk58dev9iyopshau9skg == \l\0\6\g\d\1\9\z\g\l\o\u\q\4\c\6\5\3\o\h\u\2\x\x\h\b\r\v\g\k\o\c\f\7\1\a\a\k\l\d\2\b\k\x\q\u\v\9\6\x\p\t\2\x\u\s\0\l\l\4\u\h\b\g\s\6\c\e\r\x\n\6\y\p\o\u\m\m\t\u\7\j\g\r\j\3\1\i\3\7\o\e\t\1\s\k\2\4\t\p\r\h\x\r\t\m\8\z\b\v\4\u\x\t\i\g\l\7\l\s\i\a\2\n\b\j\f\p\h\l\x\r\y\e\z\6\u\u\8\0\5\0\z\v\c\6\t\5\y\7\t\w\l\1\8\y\p\u\8\x\z\2\0\6\5\e\v\m\j\b\n\x\v\7\i\9\8\q\u\8\9\6\z\o\n\g\d\e\8\d\t\6\s\i\r\w\k\z\x\b\3\p\n\8\i\d\d\u\b\i\y\y\g\9\e\v\n\a\h\y\t\n\e\b\5\7\9\q\2\6\b\0\8\5\n\z\q\j\d\s\3\q\l\8\f\4\e\6\c\3\w\v\m\j\z\7\p\k\g\w\c\4\9\n\o\u\k\8\1\h\o\1\6\n\o\d\2\u\v\8\6\8\z\j\g\7\k\2\5\5\c\n\0\3\8\f\r\l\q\9\p\9\k\6\x\d\o\8\x\o\x\j\u\1\4\f\6\x\r\0\p\i\8\7\0\p\d\8\7\q\o\q\1\p\a\1\5\i\n\y\4\e\r\q\w\9\q\n\6\f\u\1\s\0\3\u\0\c\y\i\n\h\a\x\s\s\w\c\n\v\e\n\7\s\f\4\w\4\9\0\y\i\j\2\p\d\t\z\3\h\2\c\b\u\e\2\m\4\f\a\u\i\9\w\r\3\z\r\7\a\i\m\9\0\f\q\a\y\r\3\8\8\n\p\b\v\h\l\s\a\4\r\0\y\h\d\f\v\y\1\r\4\y\w\z\6\c\n\2\8\w\e\e\q\m\g\p\i\l\9\x\z\8\5\b\e\y\f\x\1\z\a\s\y\9\3\4\y\5\o\n\l\v\1\9\3\2\6\f\c\1\q\n\k\5\8\d\e\v\9\i\y\o\p\s\h\a\u\9\s\k\g ]]
00:16:14.072   13:52:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:14.072   13:52:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:16:14.072  [2024-12-11 13:52:56.701930] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:14.072  [2024-12-11 13:52:56.702123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77527 ]
00:16:14.331  [2024-12-11 13:52:56.897069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:14.331  [2024-12-11 13:52:57.027851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:14.897  
[2024-12-11T13:52:58.602Z] Copying: 512/512 [B] (average 125 kBps)
00:16:15.830  
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l06gd19zglouq4c653ohu2xxhbrvgkocf71aakld2bkxquv96xpt2xus0ll4uhbgs6cerxn6ypoummtu7jgrj31i37oet1sk24tprhxrtm8zbv4uxtigl7lsia2nbjfphlxryez6uu8050zvc6t5y7twl18ypu8xz2065evmjbnxv7i98qu896zongde8dt6sirwkzxb3pn8iddubiyyg9evnahytneb579q26b085nzqjds3ql8f4e6c3wvmjz7pkgwc49nouk81ho16nod2uv868zjg7k255cn038frlq9p9k6xdo8xoxju14f6xr0pi870pd87qoq1pa15iny4erqw9qn6fu1s03u0cyinhaxsswcnven7sf4w490yij2pdtz3h2cbue2m4faui9wr3zr7aim90fqayr388npbvhlsa4r0yhdfvy1r4ywz6cn28weeqmgpil9xz85beyfx1zasy934y5onlv19326fc1qnk58dev9iyopshau9skg == \l\0\6\g\d\1\9\z\g\l\o\u\q\4\c\6\5\3\o\h\u\2\x\x\h\b\r\v\g\k\o\c\f\7\1\a\a\k\l\d\2\b\k\x\q\u\v\9\6\x\p\t\2\x\u\s\0\l\l\4\u\h\b\g\s\6\c\e\r\x\n\6\y\p\o\u\m\m\t\u\7\j\g\r\j\3\1\i\3\7\o\e\t\1\s\k\2\4\t\p\r\h\x\r\t\m\8\z\b\v\4\u\x\t\i\g\l\7\l\s\i\a\2\n\b\j\f\p\h\l\x\r\y\e\z\6\u\u\8\0\5\0\z\v\c\6\t\5\y\7\t\w\l\1\8\y\p\u\8\x\z\2\0\6\5\e\v\m\j\b\n\x\v\7\i\9\8\q\u\8\9\6\z\o\n\g\d\e\8\d\t\6\s\i\r\w\k\z\x\b\3\p\n\8\i\d\d\u\b\i\y\y\g\9\e\v\n\a\h\y\t\n\e\b\5\7\9\q\2\6\b\0\8\5\n\z\q\j\d\s\3\q\l\8\f\4\e\6\c\3\w\v\m\j\z\7\p\k\g\w\c\4\9\n\o\u\k\8\1\h\o\1\6\n\o\d\2\u\v\8\6\8\z\j\g\7\k\2\5\5\c\n\0\3\8\f\r\l\q\9\p\9\k\6\x\d\o\8\x\o\x\j\u\1\4\f\6\x\r\0\p\i\8\7\0\p\d\8\7\q\o\q\1\p\a\1\5\i\n\y\4\e\r\q\w\9\q\n\6\f\u\1\s\0\3\u\0\c\y\i\n\h\a\x\s\s\w\c\n\v\e\n\7\s\f\4\w\4\9\0\y\i\j\2\p\d\t\z\3\h\2\c\b\u\e\2\m\4\f\a\u\i\9\w\r\3\z\r\7\a\i\m\9\0\f\q\a\y\r\3\8\8\n\p\b\v\h\l\s\a\4\r\0\y\h\d\f\v\y\1\r\4\y\w\z\6\c\n\2\8\w\e\e\q\m\g\p\i\l\9\x\z\8\5\b\e\y\f\x\1\z\a\s\y\9\3\4\y\5\o\n\l\v\1\9\3\2\6\f\c\1\q\n\k\5\8\d\e\v\9\i\y\o\p\s\h\a\u\9\s\k\g ]]
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:15.831   13:52:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:16:16.089  [2024-12-11 13:52:58.620206] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:16.089  [2024-12-11 13:52:58.620453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77548 ]
00:16:16.089  [2024-12-11 13:52:58.816012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:16.347  [2024-12-11 13:52:58.945160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:16.605  
[2024-12-11T13:53:00.756Z] Copying: 512/512 [B] (average 500 kBps)
00:16:17.984  
00:16:17.984   13:53:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1d6lozzsg662ryizbzosj77dwq54v843gcxj6ohvxfg3tcanuebq9a2depwi4jknjzecl8c9l61uc83cv0srn3bbbvv9i2l1n232m00pp7fgqnpu69h2gzeevgr9upa90t61sdf0cb1el2r503rrqvuxjf5ke0otirvz8e2mog44tfs7iwxfff3zftm44bxh164gx1ckf6qjruydtu37slju9uvkda9r2gsy8mh329h8rgfkep2pgwafepfrer0u4unmguv3xb4vltolose86mk5x20dea9wpkflj6muwg70utnmxa8pfsous1xosdx7sieq9sjhlpdvglfjdnbuekukrzmivamaa6remkibnt7zvmcyl5hsb7zir0jgf67rr7il4mfh2tapyr1xdcgkgxgf69p2p23iy8py4t4i0lve4gp8rfuohydur4w2qmp5a8dpoakd5v17oa2z9eon9v28k9davarywd2nv3me4u0a9d8qfcwr6cejztf7urzp == \1\d\6\l\o\z\z\s\g\6\6\2\r\y\i\z\b\z\o\s\j\7\7\d\w\q\5\4\v\8\4\3\g\c\x\j\6\o\h\v\x\f\g\3\t\c\a\n\u\e\b\q\9\a\2\d\e\p\w\i\4\j\k\n\j\z\e\c\l\8\c\9\l\6\1\u\c\8\3\c\v\0\s\r\n\3\b\b\b\v\v\9\i\2\l\1\n\2\3\2\m\0\0\p\p\7\f\g\q\n\p\u\6\9\h\2\g\z\e\e\v\g\r\9\u\p\a\9\0\t\6\1\s\d\f\0\c\b\1\e\l\2\r\5\0\3\r\r\q\v\u\x\j\f\5\k\e\0\o\t\i\r\v\z\8\e\2\m\o\g\4\4\t\f\s\7\i\w\x\f\f\f\3\z\f\t\m\4\4\b\x\h\1\6\4\g\x\1\c\k\f\6\q\j\r\u\y\d\t\u\3\7\s\l\j\u\9\u\v\k\d\a\9\r\2\g\s\y\8\m\h\3\2\9\h\8\r\g\f\k\e\p\2\p\g\w\a\f\e\p\f\r\e\r\0\u\4\u\n\m\g\u\v\3\x\b\4\v\l\t\o\l\o\s\e\8\6\m\k\5\x\2\0\d\e\a\9\w\p\k\f\l\j\6\m\u\w\g\7\0\u\t\n\m\x\a\8\p\f\s\o\u\s\1\x\o\s\d\x\7\s\i\e\q\9\s\j\h\l\p\d\v\g\l\f\j\d\n\b\u\e\k\u\k\r\z\m\i\v\a\m\a\a\6\r\e\m\k\i\b\n\t\7\z\v\m\c\y\l\5\h\s\b\7\z\i\r\0\j\g\f\6\7\r\r\7\i\l\4\m\f\h\2\t\a\p\y\r\1\x\d\c\g\k\g\x\g\f\6\9\p\2\p\2\3\i\y\8\p\y\4\t\4\i\0\l\v\e\4\g\p\8\r\f\u\o\h\y\d\u\r\4\w\2\q\m\p\5\a\8\d\p\o\a\k\d\5\v\1\7\o\a\2\z\9\e\o\n\9\v\2\8\k\9\d\a\v\a\r\y\w\d\2\n\v\3\m\e\4\u\0\a\9\d\8\q\f\c\w\r\6\c\e\j\z\t\f\7\u\r\z\p ]]
00:16:17.984   13:53:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:17.984   13:53:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:16:17.984  [2024-12-11 13:53:00.540376] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:17.984  [2024-12-11 13:53:00.540577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77572 ]
00:16:17.984  [2024-12-11 13:53:00.731200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.243  [2024-12-11 13:53:00.857389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:18.501  
[2024-12-11T13:53:02.648Z] Copying: 512/512 [B] (average 500 kBps)
00:16:19.876  
00:16:19.876   13:53:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1d6lozzsg662ryizbzosj77dwq54v843gcxj6ohvxfg3tcanuebq9a2depwi4jknjzecl8c9l61uc83cv0srn3bbbvv9i2l1n232m00pp7fgqnpu69h2gzeevgr9upa90t61sdf0cb1el2r503rrqvuxjf5ke0otirvz8e2mog44tfs7iwxfff3zftm44bxh164gx1ckf6qjruydtu37slju9uvkda9r2gsy8mh329h8rgfkep2pgwafepfrer0u4unmguv3xb4vltolose86mk5x20dea9wpkflj6muwg70utnmxa8pfsous1xosdx7sieq9sjhlpdvglfjdnbuekukrzmivamaa6remkibnt7zvmcyl5hsb7zir0jgf67rr7il4mfh2tapyr1xdcgkgxgf69p2p23iy8py4t4i0lve4gp8rfuohydur4w2qmp5a8dpoakd5v17oa2z9eon9v28k9davarywd2nv3me4u0a9d8qfcwr6cejztf7urzp == \1\d\6\l\o\z\z\s\g\6\6\2\r\y\i\z\b\z\o\s\j\7\7\d\w\q\5\4\v\8\4\3\g\c\x\j\6\o\h\v\x\f\g\3\t\c\a\n\u\e\b\q\9\a\2\d\e\p\w\i\4\j\k\n\j\z\e\c\l\8\c\9\l\6\1\u\c\8\3\c\v\0\s\r\n\3\b\b\b\v\v\9\i\2\l\1\n\2\3\2\m\0\0\p\p\7\f\g\q\n\p\u\6\9\h\2\g\z\e\e\v\g\r\9\u\p\a\9\0\t\6\1\s\d\f\0\c\b\1\e\l\2\r\5\0\3\r\r\q\v\u\x\j\f\5\k\e\0\o\t\i\r\v\z\8\e\2\m\o\g\4\4\t\f\s\7\i\w\x\f\f\f\3\z\f\t\m\4\4\b\x\h\1\6\4\g\x\1\c\k\f\6\q\j\r\u\y\d\t\u\3\7\s\l\j\u\9\u\v\k\d\a\9\r\2\g\s\y\8\m\h\3\2\9\h\8\r\g\f\k\e\p\2\p\g\w\a\f\e\p\f\r\e\r\0\u\4\u\n\m\g\u\v\3\x\b\4\v\l\t\o\l\o\s\e\8\6\m\k\5\x\2\0\d\e\a\9\w\p\k\f\l\j\6\m\u\w\g\7\0\u\t\n\m\x\a\8\p\f\s\o\u\s\1\x\o\s\d\x\7\s\i\e\q\9\s\j\h\l\p\d\v\g\l\f\j\d\n\b\u\e\k\u\k\r\z\m\i\v\a\m\a\a\6\r\e\m\k\i\b\n\t\7\z\v\m\c\y\l\5\h\s\b\7\z\i\r\0\j\g\f\6\7\r\r\7\i\l\4\m\f\h\2\t\a\p\y\r\1\x\d\c\g\k\g\x\g\f\6\9\p\2\p\2\3\i\y\8\p\y\4\t\4\i\0\l\v\e\4\g\p\8\r\f\u\o\h\y\d\u\r\4\w\2\q\m\p\5\a\8\d\p\o\a\k\d\5\v\1\7\o\a\2\z\9\e\o\n\9\v\2\8\k\9\d\a\v\a\r\y\w\d\2\n\v\3\m\e\4\u\0\a\9\d\8\q\f\c\w\r\6\c\e\j\z\t\f\7\u\r\z\p ]]
00:16:19.876   13:53:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:19.876   13:53:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:16:19.876  [2024-12-11 13:53:02.433448] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:19.876  [2024-12-11 13:53:02.433667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77597 ]
00:16:19.876  [2024-12-11 13:53:02.626364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:20.134  [2024-12-11 13:53:02.753827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:20.393  
[2024-12-11T13:53:04.541Z] Copying: 512/512 [B] (average 83 kBps)
00:16:21.769  
00:16:21.769   13:53:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1d6lozzsg662ryizbzosj77dwq54v843gcxj6ohvxfg3tcanuebq9a2depwi4jknjzecl8c9l61uc83cv0srn3bbbvv9i2l1n232m00pp7fgqnpu69h2gzeevgr9upa90t61sdf0cb1el2r503rrqvuxjf5ke0otirvz8e2mog44tfs7iwxfff3zftm44bxh164gx1ckf6qjruydtu37slju9uvkda9r2gsy8mh329h8rgfkep2pgwafepfrer0u4unmguv3xb4vltolose86mk5x20dea9wpkflj6muwg70utnmxa8pfsous1xosdx7sieq9sjhlpdvglfjdnbuekukrzmivamaa6remkibnt7zvmcyl5hsb7zir0jgf67rr7il4mfh2tapyr1xdcgkgxgf69p2p23iy8py4t4i0lve4gp8rfuohydur4w2qmp5a8dpoakd5v17oa2z9eon9v28k9davarywd2nv3me4u0a9d8qfcwr6cejztf7urzp == \1\d\6\l\o\z\z\s\g\6\6\2\r\y\i\z\b\z\o\s\j\7\7\d\w\q\5\4\v\8\4\3\g\c\x\j\6\o\h\v\x\f\g\3\t\c\a\n\u\e\b\q\9\a\2\d\e\p\w\i\4\j\k\n\j\z\e\c\l\8\c\9\l\6\1\u\c\8\3\c\v\0\s\r\n\3\b\b\b\v\v\9\i\2\l\1\n\2\3\2\m\0\0\p\p\7\f\g\q\n\p\u\6\9\h\2\g\z\e\e\v\g\r\9\u\p\a\9\0\t\6\1\s\d\f\0\c\b\1\e\l\2\r\5\0\3\r\r\q\v\u\x\j\f\5\k\e\0\o\t\i\r\v\z\8\e\2\m\o\g\4\4\t\f\s\7\i\w\x\f\f\f\3\z\f\t\m\4\4\b\x\h\1\6\4\g\x\1\c\k\f\6\q\j\r\u\y\d\t\u\3\7\s\l\j\u\9\u\v\k\d\a\9\r\2\g\s\y\8\m\h\3\2\9\h\8\r\g\f\k\e\p\2\p\g\w\a\f\e\p\f\r\e\r\0\u\4\u\n\m\g\u\v\3\x\b\4\v\l\t\o\l\o\s\e\8\6\m\k\5\x\2\0\d\e\a\9\w\p\k\f\l\j\6\m\u\w\g\7\0\u\t\n\m\x\a\8\p\f\s\o\u\s\1\x\o\s\d\x\7\s\i\e\q\9\s\j\h\l\p\d\v\g\l\f\j\d\n\b\u\e\k\u\k\r\z\m\i\v\a\m\a\a\6\r\e\m\k\i\b\n\t\7\z\v\m\c\y\l\5\h\s\b\7\z\i\r\0\j\g\f\6\7\r\r\7\i\l\4\m\f\h\2\t\a\p\y\r\1\x\d\c\g\k\g\x\g\f\6\9\p\2\p\2\3\i\y\8\p\y\4\t\4\i\0\l\v\e\4\g\p\8\r\f\u\o\h\y\d\u\r\4\w\2\q\m\p\5\a\8\d\p\o\a\k\d\5\v\1\7\o\a\2\z\9\e\o\n\9\v\2\8\k\9\d\a\v\a\r\y\w\d\2\n\v\3\m\e\4\u\0\a\9\d\8\q\f\c\w\r\6\c\e\j\z\t\f\7\u\r\z\p ]]
00:16:21.769   13:53:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:21.769   13:53:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:16:21.769  [2024-12-11 13:53:04.363080] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:21.769  [2024-12-11 13:53:04.363336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77621 ]
00:16:22.027  [2024-12-11 13:53:04.559385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:22.027  [2024-12-11 13:53:04.685705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:22.285  
[2024-12-11T13:53:06.490Z] Copying: 512/512 [B] (average 125 kBps)
00:16:23.718  
00:16:23.718  ************************************
00:16:23.718  END TEST dd_flags_misc
00:16:23.718  ************************************
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 1d6lozzsg662ryizbzosj77dwq54v843gcxj6ohvxfg3tcanuebq9a2depwi4jknjzecl8c9l61uc83cv0srn3bbbvv9i2l1n232m00pp7fgqnpu69h2gzeevgr9upa90t61sdf0cb1el2r503rrqvuxjf5ke0otirvz8e2mog44tfs7iwxfff3zftm44bxh164gx1ckf6qjruydtu37slju9uvkda9r2gsy8mh329h8rgfkep2pgwafepfrer0u4unmguv3xb4vltolose86mk5x20dea9wpkflj6muwg70utnmxa8pfsous1xosdx7sieq9sjhlpdvglfjdnbuekukrzmivamaa6remkibnt7zvmcyl5hsb7zir0jgf67rr7il4mfh2tapyr1xdcgkgxgf69p2p23iy8py4t4i0lve4gp8rfuohydur4w2qmp5a8dpoakd5v17oa2z9eon9v28k9davarywd2nv3me4u0a9d8qfcwr6cejztf7urzp == \1\d\6\l\o\z\z\s\g\6\6\2\r\y\i\z\b\z\o\s\j\7\7\d\w\q\5\4\v\8\4\3\g\c\x\j\6\o\h\v\x\f\g\3\t\c\a\n\u\e\b\q\9\a\2\d\e\p\w\i\4\j\k\n\j\z\e\c\l\8\c\9\l\6\1\u\c\8\3\c\v\0\s\r\n\3\b\b\b\v\v\9\i\2\l\1\n\2\3\2\m\0\0\p\p\7\f\g\q\n\p\u\6\9\h\2\g\z\e\e\v\g\r\9\u\p\a\9\0\t\6\1\s\d\f\0\c\b\1\e\l\2\r\5\0\3\r\r\q\v\u\x\j\f\5\k\e\0\o\t\i\r\v\z\8\e\2\m\o\g\4\4\t\f\s\7\i\w\x\f\f\f\3\z\f\t\m\4\4\b\x\h\1\6\4\g\x\1\c\k\f\6\q\j\r\u\y\d\t\u\3\7\s\l\j\u\9\u\v\k\d\a\9\r\2\g\s\y\8\m\h\3\2\9\h\8\r\g\f\k\e\p\2\p\g\w\a\f\e\p\f\r\e\r\0\u\4\u\n\m\g\u\v\3\x\b\4\v\l\t\o\l\o\s\e\8\6\m\k\5\x\2\0\d\e\a\9\w\p\k\f\l\j\6\m\u\w\g\7\0\u\t\n\m\x\a\8\p\f\s\o\u\s\1\x\o\s\d\x\7\s\i\e\q\9\s\j\h\l\p\d\v\g\l\f\j\d\n\b\u\e\k\u\k\r\z\m\i\v\a\m\a\a\6\r\e\m\k\i\b\n\t\7\z\v\m\c\y\l\5\h\s\b\7\z\i\r\0\j\g\f\6\7\r\r\7\i\l\4\m\f\h\2\t\a\p\y\r\1\x\d\c\g\k\g\x\g\f\6\9\p\2\p\2\3\i\y\8\p\y\4\t\4\i\0\l\v\e\4\g\p\8\r\f\u\o\h\y\d\u\r\4\w\2\q\m\p\5\a\8\d\p\o\a\k\d\5\v\1\7\o\a\2\z\9\e\o\n\9\v\2\8\k\9\d\a\v\a\r\y\w\d\2\n\v\3\m\e\4\u\0\a\9\d\8\q\f\c\w\r\6\c\e\j\z\t\f\7\u\r\z\p ]]
00:16:23.718  
00:16:23.718  real	0m15.906s
00:16:23.718  user	0m12.579s
00:16:23.718  sys	0m2.404s
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
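The eight copies of dd_flags_misc above come from a 2x4 flag matrix declared at posix.sh@81-@82 and walked at @85-@89, with posix.sh@93 asserting after each copy that the output bytes equal the input. Condensed, with the gen_bytes-to-file plumbing abbreviated:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)    # direct nonblock sync dsync
    for flag_ro in "${flags_ro[@]}"; do
        # gen_bytes 512 refreshes dd.dump0 with new random input here
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
                    --of=dd.dump1 --oflag="$flag_rw"
        done
    done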
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO'
00:16:23.718  * Second test run, disabling liburing, forcing AIO
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio")
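The += above (posix.sh@113) is how the second pass forces the POSIX AIO engine: the suite keeps the spdk_dd invocation in an array that every test expands, so appending --aio once re-runs the shared test bodies with the flag in place. Sketch, with the array's initial contents assumed (the trace only shows the append):

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=("--aio")                 # disable liburing, force AIO
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1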
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:23.718  ************************************
00:16:23.718  START TEST dd_flag_append_forced_aio
00:16:23.718  ************************************
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1
00:16:23.718    13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32
00:16:23.718    13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:16:23.718    13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=e9ve07qu68opftrgksciqiuv9m4madr2
00:16:23.718    13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32
00:16:23.718    13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:16:23.718    13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=t2uk748dh47i8uw2cne5n9f6pjqqbggz
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s e9ve07qu68opftrgksciqiuv9m4madr2
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s t2uk748dh47i8uw2cne5n9f6pjqqbggz
00:16:23.718   13:53:06 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:16:23.718  [2024-12-11 13:53:06.359683] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:23.718  [2024-12-11 13:53:06.359895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77661 ]
00:16:23.977  [2024-12-11 13:53:06.555801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:23.977  [2024-12-11 13:53:06.686372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:24.542  
[2024-12-11T13:53:08.249Z] Copying: 32/32 [B] (average 31 kBps)
00:16:25.477  
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ t2uk748dh47i8uw2cne5n9f6pjqqbggze9ve07qu68opftrgksciqiuv9m4madr2 == \t\2\u\k\7\4\8\d\h\4\7\i\8\u\w\2\c\n\e\5\n\9\f\6\p\j\q\q\b\g\g\z\e\9\v\e\0\7\q\u\6\8\o\p\f\t\r\g\k\s\c\i\q\i\u\v\9\m\4\m\a\d\r\2 ]]
00:16:25.477  
00:16:25.477  real	0m1.927s
00:16:25.477  user	0m1.535s
00:16:25.477  sys	0m0.278s
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:25.477  ************************************
00:16:25.477  END TEST dd_flag_append_forced_aio
00:16:25.477  ************************************
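The append check above (posix.sh@16-@27) boils down to: generate two 32-byte strings, write one per dump file, copy dump0 onto dump1 with --oflag=append, and require the destination to hold its original bytes followed by the source's, which is exactly the concatenation compared at @27. Sketch, file plumbing abbreviated:

    dump0=$(gen_bytes 32)             # e9ve...adr2 in the run above
    dump1=$(gen_bytes 32)             # t2uk...bggz in the run above
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]]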
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:25.477  ************************************
00:16:25.477  START TEST dd_flag_directory_forced_aio
00:16:25.477  ************************************
00:16:25.477   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:25.736    13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:25.736    13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:16:25.736   13:53:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:25.736  [2024-12-11 13:53:08.331999] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:25.736  [2024-12-11 13:53:08.332156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77700 ]
00:16:25.736  [2024-12-11 13:53:08.508006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:25.994  [2024-12-11 13:53:08.632831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:26.253  [2024-12-11 13:53:08.989575] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:16:26.253  [2024-12-11 13:53:08.989651] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:16:26.253  [2024-12-11 13:53:08.989676] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:27.185  [2024-12-11 13:53:09.850663] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:27.444    13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:27.444    13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:27.444   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:16:27.445   13:53:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:16:27.445  [2024-12-11 13:53:10.201222] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:27.445  [2024-12-11 13:53:10.201362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77728 ]
00:16:27.703  [2024-12-11 13:53:10.374993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:27.961  [2024-12-11 13:53:10.501166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:28.220  [2024-12-11 13:53:10.851273] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:16:28.220  [2024-12-11 13:53:10.851357] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:16:28.220  [2024-12-11 13:53:10.851383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:29.220  [2024-12-11 13:53:11.739982] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:29.220  
00:16:29.220  real	0m3.739s
00:16:29.220  user	0m3.047s
00:16:29.220  sys	0m0.491s
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:29.220   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:29.220  ************************************
00:16:29.220  END TEST dd_flag_directory_forced_aio
00:16:29.220  ************************************
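Both "Not a directory" failures above are deliberate: posix.sh@31 reads a regular file with --iflag=directory and @32 writes it with --oflag=directory; each open fails with ENOTDIR and NOT flips the failure into a pass. With paths shortened:

    NOT spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0
    NOT spdk_dd --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory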
00:16:29.479   13:53:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow
00:16:29.479   13:53:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:29.479   13:53:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:29.479   13:53:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:29.479  ************************************
00:16:29.479  START TEST dd_flag_nofollow_forced_aio
00:16:29.479  ************************************
00:16:29.479   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
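The two ln -fs calls give each dump file a symlink twin; the nofollow runs below then target the .link paths, where O_NOFOLLOW makes the open fail with ELOOP, the errno strerror() renders as "Too many levels of symbolic links". GNU coreutils dd exposes the same flag for comparison; the exact error wording below is an assumption:

    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link iflag=nofollow of=/dev/null
    # fails with something like:
    #   dd: failed to open 'dd.dump0.link': Too many levels of symbolic links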
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:29.480    13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:29.480    13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:16:29.480   13:53:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:29.480  [2024-12-11 13:53:12.161599] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:29.480  [2024-12-11 13:53:12.161808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77770 ]
00:16:29.738  [2024-12-11 13:53:12.356963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.738  [2024-12-11 13:53:12.490016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:30.306  [2024-12-11 13:53:12.845372] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:16:30.306  [2024-12-11 13:53:12.845432] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:16:30.306  [2024-12-11 13:53:12.845456] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:31.242  [2024-12-11 13:53:13.715612] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:31.242    13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:31.242    13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:16:31.242   13:53:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:16:31.501  [2024-12-11 13:53:14.061668] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:31.501  [2024-12-11 13:53:14.061851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77797 ]
00:16:31.501  [2024-12-11 13:53:14.256265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:31.759  [2024-12-11 13:53:14.382926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:32.018  [2024-12-11 13:53:14.742535] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:16:32.018  [2024-12-11 13:53:14.742593] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:16:32.018  [2024-12-11 13:53:14.742618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:32.952  [2024-12-11 13:53:15.608184] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:33.210   13:53:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:33.210  [2024-12-11 13:53:15.969591] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:33.210  [2024-12-11 13:53:15.969800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77821 ]
00:16:33.469  [2024-12-11 13:53:16.163764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:33.727  [2024-12-11 13:53:16.292668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.986  
[2024-12-11T13:53:18.189Z] Copying: 512/512 [B] (average 500 kBps)
00:16:35.417  
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ apy2un5d3174wsye0voxc2hjqybcxdsx6fdet22mo0ekwh0pvn3chht8eoq2hyi5nrdzshy5aue4hnnt6v5bxz2ds4nyyd7a61ht9qdmit30fed6si27no3dmdyb9d4vfcsbu5328kf5p3k7otm0x6ybqmje6js2v9a5p3k6ku2absu425vovfe3huh6msnbxhqhm43y6i9g7v7kxx62m3cxk207a15rri8w16zh312gtv1xl726b53nfxrjaim69ofh7d3v64wqlzxumg265pn0jeo3arrfrhphdi1fi24os0853lm0sc0m4u120dqv0ny98lp4e8hjh0h7sdy2877nlc8nu39bv1yfzt6njwj6w2wysxx454qikrdwjm886e0j9k1ufe3xw0ou5ce330ogtkoqgvxbuktg2rlam4ypofgjzr0c66gvjwv0o5dxzhwh37klyvp4ucz962si779bc6s81yi6mu3docchwqg8pdp73yjpjp7ocamloqwp == \a\p\y\2\u\n\5\d\3\1\7\4\w\s\y\e\0\v\o\x\c\2\h\j\q\y\b\c\x\d\s\x\6\f\d\e\t\2\2\m\o\0\e\k\w\h\0\p\v\n\3\c\h\h\t\8\e\o\q\2\h\y\i\5\n\r\d\z\s\h\y\5\a\u\e\4\h\n\n\t\6\v\5\b\x\z\2\d\s\4\n\y\y\d\7\a\6\1\h\t\9\q\d\m\i\t\3\0\f\e\d\6\s\i\2\7\n\o\3\d\m\d\y\b\9\d\4\v\f\c\s\b\u\5\3\2\8\k\f\5\p\3\k\7\o\t\m\0\x\6\y\b\q\m\j\e\6\j\s\2\v\9\a\5\p\3\k\6\k\u\2\a\b\s\u\4\2\5\v\o\v\f\e\3\h\u\h\6\m\s\n\b\x\h\q\h\m\4\3\y\6\i\9\g\7\v\7\k\x\x\6\2\m\3\c\x\k\2\0\7\a\1\5\r\r\i\8\w\1\6\z\h\3\1\2\g\t\v\1\x\l\7\2\6\b\5\3\n\f\x\r\j\a\i\m\6\9\o\f\h\7\d\3\v\6\4\w\q\l\z\x\u\m\g\2\6\5\p\n\0\j\e\o\3\a\r\r\f\r\h\p\h\d\i\1\f\i\2\4\o\s\0\8\5\3\l\m\0\s\c\0\m\4\u\1\2\0\d\q\v\0\n\y\9\8\l\p\4\e\8\h\j\h\0\h\7\s\d\y\2\8\7\7\n\l\c\8\n\u\3\9\b\v\1\y\f\z\t\6\n\j\w\j\6\w\2\w\y\s\x\x\4\5\4\q\i\k\r\d\w\j\m\8\8\6\e\0\j\9\k\1\u\f\e\3\x\w\0\o\u\5\c\e\3\3\0\o\g\t\k\o\q\g\v\x\b\u\k\t\g\2\r\l\a\m\4\y\p\o\f\g\j\z\r\0\c\6\6\g\v\j\w\v\0\o\5\d\x\z\h\w\h\3\7\k\l\y\v\p\4\u\c\z\9\6\2\s\i\7\7\9\b\c\6\s\8\1\y\i\6\m\u\3\d\o\c\c\h\w\q\g\8\p\d\p\7\3\y\j\p\j\p\7\o\c\a\m\l\o\q\w\p ]]
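The run of backslashes in the comparison above is not corruption: when xtrace prints a [[ ... == ... ]] test whose right-hand side was quoted, bash escapes every character to show it is matched literally rather than as a glob pattern. The check itself just confirms that the random payload written by gen_bytes survived the copy through the symlink intact. A tiny reproduction of the escaping:

    $ expected=abc
    $ set -x
    $ [[ abc == "$expected" ]]
    + [[ abc == \a\b\c ]]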
00:16:35.417  
00:16:35.417  real	0m5.727s
00:16:35.417  user	0m4.587s
00:16:35.417  sys	0m0.825s
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:35.417  ************************************
00:16:35.417  END TEST dd_flag_nofollow_forced_aio
00:16:35.417  ************************************
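Both failing copies in this test are the intended outcome: --iflag=nofollow and --oflag=nofollow make spdk_dd open the file with O_NOFOLLOW, and open(2) on a symlink then fails with ELOOP, which strerror() renders as "Too many levels of symbolic links". A plausible reduction of the setup (the exact commands in dd/posix.sh may differ):

    ln -sf dd.dump0 dd.dump0.link    # dd.dump0.link -> dd.dump0
    # expected to fail with ELOOP because of O_NOFOLLOW:
    NOT spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1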
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:35.417  ************************************
00:16:35.417  START TEST dd_flag_noatime_forced_aio
00:16:35.417  ************************************
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:16:35.417   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:35.417    13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:35.418   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733925196
00:16:35.418    13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:35.418   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733925197
00:16:35.418   13:53:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1
00:16:36.354   13:53:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:36.354  [2024-12-11 13:53:18.973339] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:36.354  [2024-12-11 13:53:18.973525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77876 ]
00:16:36.613  [2024-12-11 13:53:19.168168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:36.613  [2024-12-11 13:53:19.295615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:37.180  
[2024-12-11T13:53:20.887Z] Copying: 512/512 [B] (average 500 kBps)
00:16:38.115  
00:16:38.115    13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:38.115   13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733925196 ))
00:16:38.115    13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:38.115   13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733925197 ))
00:16:38.115   13:53:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:16:38.374  [2024-12-11 13:53:20.933300] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:38.374  [2024-12-11 13:53:20.933530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77905 ]
00:16:38.374  [2024-12-11 13:53:21.128146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:38.632  [2024-12-11 13:53:21.256302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:38.891  
[2024-12-11T13:53:22.756Z] Copying: 512/512 [B] (average 500 kBps)
00:16:39.984  
00:16:40.243    13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733925201 ))
00:16:40.243  
00:16:40.243  real	0m4.922s
00:16:40.243  user	0m3.096s
00:16:40.243  sys	0m0.590s
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:40.243  ************************************
00:16:40.243  END TEST dd_flag_noatime_forced_aio
00:16:40.243  ************************************
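The noatime test above records each dump file's access time with stat --printf=%X (epoch seconds), sleeps one second because %X only resolves whole seconds, copies with --iflag=noatime, and asserts the source atime is unchanged; a second copy without the flag must then advance it, hence the later (( atime_if < 1733925201 )) check. A condensed sketch, assuming the filesystem updates atime on ordinary reads (as the CI mount evidently does):

    atime_if=$(stat --printf=%X dd.dump0)            # atime before, epoch seconds
    sleep 1                                          # %X has one-second resolution
    spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))   # O_NOATIME left atime alone
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_if ))    # a plain read advances it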
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:40.243  ************************************
00:16:40.243  START TEST dd_flags_misc_forced_aio
00:16:40.243  ************************************
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
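The two arrays just defined drive the rest of this test: the outer loop walks the read flags (direct, nonblock) and the inner loop the write flags (the same two plus sync and dsync), so spdk_dd runs 2 x 4 = 8 times, each copy followed by the [[ ... == ... ]] round-trip check seen below. A condensed sketch of the loop structure (the redirection from gen_bytes is a guess; the real helper may write the dump file itself):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # the write side also takes sync/dsync
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > dd.dump0             # fresh random payload per read flag
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                    --of=dd.dump1 --oflag="$flag_rw"
            [[ $(<dd.dump1) == "$(<dd.dump0)" ]]   # payload must round-trip intact
        done
    done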
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:40.243   13:53:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:16:40.243  [2024-12-11 13:53:22.924864] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:40.243  [2024-12-11 13:53:22.925059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77948 ]
00:16:40.502  [2024-12-11 13:53:23.116572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:40.502  [2024-12-11 13:53:23.245898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:41.069  
[2024-12-11T13:53:24.778Z] Copying: 512/512 [B] (average 500 kBps)
00:16:42.006  
00:16:42.006   13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8l6yvguiyoyoijkxulelaq3ugm6mbmri3f6c8ib9u8zfg7k6jgxsgy1w2kosc3onj5ln34ndehg5x0wi09mn4n9xtlfl14obtiu7c9raehsiuvtbw5wfnfvb9tcxhm6h4hol3w0h0p1szr7sghldetaf97znvfde5wd2gkp06s3zbhzyvx87qm9woyb3akmh56xn7ocelj5yf883afq16slkoeif4midozuakcucryguijo9pc1jhqx1d7qbej2tdqxtdqv7u2067o9k8pr2lu35ks5sdkucwjltzushdg4cfmuceoxksb04i3vzll0lfy91d5ovr3vznbifn7klom65pzy132ricwpyy7xa1fwtwnlttxt4r6j52zwddjy8kf5qdz63jypfeg4zsaeoamq6no2eirsvrujxn39gzwohc6ed9ftjv228iz56ue59tn7d20vpzb6dl8t4fpnmr3ufv9kuoagspjrnkx6o6m8pg5kby7cc331hpntcupnh == \8\l\6\y\v\g\u\i\y\o\y\o\i\j\k\x\u\l\e\l\a\q\3\u\g\m\6\m\b\m\r\i\3\f\6\c\8\i\b\9\u\8\z\f\g\7\k\6\j\g\x\s\g\y\1\w\2\k\o\s\c\3\o\n\j\5\l\n\3\4\n\d\e\h\g\5\x\0\w\i\0\9\m\n\4\n\9\x\t\l\f\l\1\4\o\b\t\i\u\7\c\9\r\a\e\h\s\i\u\v\t\b\w\5\w\f\n\f\v\b\9\t\c\x\h\m\6\h\4\h\o\l\3\w\0\h\0\p\1\s\z\r\7\s\g\h\l\d\e\t\a\f\9\7\z\n\v\f\d\e\5\w\d\2\g\k\p\0\6\s\3\z\b\h\z\y\v\x\8\7\q\m\9\w\o\y\b\3\a\k\m\h\5\6\x\n\7\o\c\e\l\j\5\y\f\8\8\3\a\f\q\1\6\s\l\k\o\e\i\f\4\m\i\d\o\z\u\a\k\c\u\c\r\y\g\u\i\j\o\9\p\c\1\j\h\q\x\1\d\7\q\b\e\j\2\t\d\q\x\t\d\q\v\7\u\2\0\6\7\o\9\k\8\p\r\2\l\u\3\5\k\s\5\s\d\k\u\c\w\j\l\t\z\u\s\h\d\g\4\c\f\m\u\c\e\o\x\k\s\b\0\4\i\3\v\z\l\l\0\l\f\y\9\1\d\5\o\v\r\3\v\z\n\b\i\f\n\7\k\l\o\m\6\5\p\z\y\1\3\2\r\i\c\w\p\y\y\7\x\a\1\f\w\t\w\n\l\t\t\x\t\4\r\6\j\5\2\z\w\d\d\j\y\8\k\f\5\q\d\z\6\3\j\y\p\f\e\g\4\z\s\a\e\o\a\m\q\6\n\o\2\e\i\r\s\v\r\u\j\x\n\3\9\g\z\w\o\h\c\6\e\d\9\f\t\j\v\2\2\8\i\z\5\6\u\e\5\9\t\n\7\d\2\0\v\p\z\b\6\d\l\8\t\4\f\p\n\m\r\3\u\f\v\9\k\u\o\a\g\s\p\j\r\n\k\x\6\o\6\m\8\p\g\5\k\b\y\7\c\c\3\3\1\h\p\n\t\c\u\p\n\h ]]
00:16:42.006   13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:42.006   13:53:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:16:42.265  [2024-12-11 13:53:24.847278] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:42.265  [2024-12-11 13:53:24.847475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77969 ]
00:16:42.265  [2024-12-11 13:53:25.043729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:42.525  [2024-12-11 13:53:25.171769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:42.786  
[2024-12-11T13:53:26.934Z] Copying: 512/512 [B] (average 500 kBps)
00:16:44.162  
00:16:44.162   13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8l6yvguiyoyoijkxulelaq3ugm6mbmri3f6c8ib9u8zfg7k6jgxsgy1w2kosc3onj5ln34ndehg5x0wi09mn4n9xtlfl14obtiu7c9raehsiuvtbw5wfnfvb9tcxhm6h4hol3w0h0p1szr7sghldetaf97znvfde5wd2gkp06s3zbhzyvx87qm9woyb3akmh56xn7ocelj5yf883afq16slkoeif4midozuakcucryguijo9pc1jhqx1d7qbej2tdqxtdqv7u2067o9k8pr2lu35ks5sdkucwjltzushdg4cfmuceoxksb04i3vzll0lfy91d5ovr3vznbifn7klom65pzy132ricwpyy7xa1fwtwnlttxt4r6j52zwddjy8kf5qdz63jypfeg4zsaeoamq6no2eirsvrujxn39gzwohc6ed9ftjv228iz56ue59tn7d20vpzb6dl8t4fpnmr3ufv9kuoagspjrnkx6o6m8pg5kby7cc331hpntcupnh == \8\l\6\y\v\g\u\i\y\o\y\o\i\j\k\x\u\l\e\l\a\q\3\u\g\m\6\m\b\m\r\i\3\f\6\c\8\i\b\9\u\8\z\f\g\7\k\6\j\g\x\s\g\y\1\w\2\k\o\s\c\3\o\n\j\5\l\n\3\4\n\d\e\h\g\5\x\0\w\i\0\9\m\n\4\n\9\x\t\l\f\l\1\4\o\b\t\i\u\7\c\9\r\a\e\h\s\i\u\v\t\b\w\5\w\f\n\f\v\b\9\t\c\x\h\m\6\h\4\h\o\l\3\w\0\h\0\p\1\s\z\r\7\s\g\h\l\d\e\t\a\f\9\7\z\n\v\f\d\e\5\w\d\2\g\k\p\0\6\s\3\z\b\h\z\y\v\x\8\7\q\m\9\w\o\y\b\3\a\k\m\h\5\6\x\n\7\o\c\e\l\j\5\y\f\8\8\3\a\f\q\1\6\s\l\k\o\e\i\f\4\m\i\d\o\z\u\a\k\c\u\c\r\y\g\u\i\j\o\9\p\c\1\j\h\q\x\1\d\7\q\b\e\j\2\t\d\q\x\t\d\q\v\7\u\2\0\6\7\o\9\k\8\p\r\2\l\u\3\5\k\s\5\s\d\k\u\c\w\j\l\t\z\u\s\h\d\g\4\c\f\m\u\c\e\o\x\k\s\b\0\4\i\3\v\z\l\l\0\l\f\y\9\1\d\5\o\v\r\3\v\z\n\b\i\f\n\7\k\l\o\m\6\5\p\z\y\1\3\2\r\i\c\w\p\y\y\7\x\a\1\f\w\t\w\n\l\t\t\x\t\4\r\6\j\5\2\z\w\d\d\j\y\8\k\f\5\q\d\z\6\3\j\y\p\f\e\g\4\z\s\a\e\o\a\m\q\6\n\o\2\e\i\r\s\v\r\u\j\x\n\3\9\g\z\w\o\h\c\6\e\d\9\f\t\j\v\2\2\8\i\z\5\6\u\e\5\9\t\n\7\d\2\0\v\p\z\b\6\d\l\8\t\4\f\p\n\m\r\3\u\f\v\9\k\u\o\a\g\s\p\j\r\n\k\x\6\o\6\m\8\p\g\5\k\b\y\7\c\c\3\3\1\h\p\n\t\c\u\p\n\h ]]
00:16:44.162   13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:44.162   13:53:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:16:44.162  [2024-12-11 13:53:26.770483] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:44.162  [2024-12-11 13:53:26.770706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77988 ]
00:16:44.421  [2024-12-11 13:53:26.964492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:44.421  [2024-12-11 13:53:27.090946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:44.679  
[2024-12-11T13:53:28.829Z] Copying: 512/512 [B] (average 71 kBps)
00:16:46.057  
00:16:46.057   13:53:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8l6yvguiyoyoijkxulelaq3ugm6mbmri3f6c8ib9u8zfg7k6jgxsgy1w2kosc3onj5ln34ndehg5x0wi09mn4n9xtlfl14obtiu7c9raehsiuvtbw5wfnfvb9tcxhm6h4hol3w0h0p1szr7sghldetaf97znvfde5wd2gkp06s3zbhzyvx87qm9woyb3akmh56xn7ocelj5yf883afq16slkoeif4midozuakcucryguijo9pc1jhqx1d7qbej2tdqxtdqv7u2067o9k8pr2lu35ks5sdkucwjltzushdg4cfmuceoxksb04i3vzll0lfy91d5ovr3vznbifn7klom65pzy132ricwpyy7xa1fwtwnlttxt4r6j52zwddjy8kf5qdz63jypfeg4zsaeoamq6no2eirsvrujxn39gzwohc6ed9ftjv228iz56ue59tn7d20vpzb6dl8t4fpnmr3ufv9kuoagspjrnkx6o6m8pg5kby7cc331hpntcupnh == \8\l\6\y\v\g\u\i\y\o\y\o\i\j\k\x\u\l\e\l\a\q\3\u\g\m\6\m\b\m\r\i\3\f\6\c\8\i\b\9\u\8\z\f\g\7\k\6\j\g\x\s\g\y\1\w\2\k\o\s\c\3\o\n\j\5\l\n\3\4\n\d\e\h\g\5\x\0\w\i\0\9\m\n\4\n\9\x\t\l\f\l\1\4\o\b\t\i\u\7\c\9\r\a\e\h\s\i\u\v\t\b\w\5\w\f\n\f\v\b\9\t\c\x\h\m\6\h\4\h\o\l\3\w\0\h\0\p\1\s\z\r\7\s\g\h\l\d\e\t\a\f\9\7\z\n\v\f\d\e\5\w\d\2\g\k\p\0\6\s\3\z\b\h\z\y\v\x\8\7\q\m\9\w\o\y\b\3\a\k\m\h\5\6\x\n\7\o\c\e\l\j\5\y\f\8\8\3\a\f\q\1\6\s\l\k\o\e\i\f\4\m\i\d\o\z\u\a\k\c\u\c\r\y\g\u\i\j\o\9\p\c\1\j\h\q\x\1\d\7\q\b\e\j\2\t\d\q\x\t\d\q\v\7\u\2\0\6\7\o\9\k\8\p\r\2\l\u\3\5\k\s\5\s\d\k\u\c\w\j\l\t\z\u\s\h\d\g\4\c\f\m\u\c\e\o\x\k\s\b\0\4\i\3\v\z\l\l\0\l\f\y\9\1\d\5\o\v\r\3\v\z\n\b\i\f\n\7\k\l\o\m\6\5\p\z\y\1\3\2\r\i\c\w\p\y\y\7\x\a\1\f\w\t\w\n\l\t\t\x\t\4\r\6\j\5\2\z\w\d\d\j\y\8\k\f\5\q\d\z\6\3\j\y\p\f\e\g\4\z\s\a\e\o\a\m\q\6\n\o\2\e\i\r\s\v\r\u\j\x\n\3\9\g\z\w\o\h\c\6\e\d\9\f\t\j\v\2\2\8\i\z\5\6\u\e\5\9\t\n\7\d\2\0\v\p\z\b\6\d\l\8\t\4\f\p\n\m\r\3\u\f\v\9\k\u\o\a\g\s\p\j\r\n\k\x\6\o\6\m\8\p\g\5\k\b\y\7\c\c\3\3\1\h\p\n\t\c\u\p\n\h ]]
00:16:46.057   13:53:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:46.057   13:53:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:16:46.057  [2024-12-11 13:53:28.683775] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:46.057  [2024-12-11 13:53:28.683971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78008 ]
00:16:46.316  [2024-12-11 13:53:28.885584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:46.316  [2024-12-11 13:53:29.062289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:46.883  
[2024-12-11T13:53:30.612Z] Copying: 512/512 [B] (average 125 kBps)
00:16:47.840  
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8l6yvguiyoyoijkxulelaq3ugm6mbmri3f6c8ib9u8zfg7k6jgxsgy1w2kosc3onj5ln34ndehg5x0wi09mn4n9xtlfl14obtiu7c9raehsiuvtbw5wfnfvb9tcxhm6h4hol3w0h0p1szr7sghldetaf97znvfde5wd2gkp06s3zbhzyvx87qm9woyb3akmh56xn7ocelj5yf883afq16slkoeif4midozuakcucryguijo9pc1jhqx1d7qbej2tdqxtdqv7u2067o9k8pr2lu35ks5sdkucwjltzushdg4cfmuceoxksb04i3vzll0lfy91d5ovr3vznbifn7klom65pzy132ricwpyy7xa1fwtwnlttxt4r6j52zwddjy8kf5qdz63jypfeg4zsaeoamq6no2eirsvrujxn39gzwohc6ed9ftjv228iz56ue59tn7d20vpzb6dl8t4fpnmr3ufv9kuoagspjrnkx6o6m8pg5kby7cc331hpntcupnh == \8\l\6\y\v\g\u\i\y\o\y\o\i\j\k\x\u\l\e\l\a\q\3\u\g\m\6\m\b\m\r\i\3\f\6\c\8\i\b\9\u\8\z\f\g\7\k\6\j\g\x\s\g\y\1\w\2\k\o\s\c\3\o\n\j\5\l\n\3\4\n\d\e\h\g\5\x\0\w\i\0\9\m\n\4\n\9\x\t\l\f\l\1\4\o\b\t\i\u\7\c\9\r\a\e\h\s\i\u\v\t\b\w\5\w\f\n\f\v\b\9\t\c\x\h\m\6\h\4\h\o\l\3\w\0\h\0\p\1\s\z\r\7\s\g\h\l\d\e\t\a\f\9\7\z\n\v\f\d\e\5\w\d\2\g\k\p\0\6\s\3\z\b\h\z\y\v\x\8\7\q\m\9\w\o\y\b\3\a\k\m\h\5\6\x\n\7\o\c\e\l\j\5\y\f\8\8\3\a\f\q\1\6\s\l\k\o\e\i\f\4\m\i\d\o\z\u\a\k\c\u\c\r\y\g\u\i\j\o\9\p\c\1\j\h\q\x\1\d\7\q\b\e\j\2\t\d\q\x\t\d\q\v\7\u\2\0\6\7\o\9\k\8\p\r\2\l\u\3\5\k\s\5\s\d\k\u\c\w\j\l\t\z\u\s\h\d\g\4\c\f\m\u\c\e\o\x\k\s\b\0\4\i\3\v\z\l\l\0\l\f\y\9\1\d\5\o\v\r\3\v\z\n\b\i\f\n\7\k\l\o\m\6\5\p\z\y\1\3\2\r\i\c\w\p\y\y\7\x\a\1\f\w\t\w\n\l\t\t\x\t\4\r\6\j\5\2\z\w\d\d\j\y\8\k\f\5\q\d\z\6\3\j\y\p\f\e\g\4\z\s\a\e\o\a\m\q\6\n\o\2\e\i\r\s\v\r\u\j\x\n\3\9\g\z\w\o\h\c\6\e\d\9\f\t\j\v\2\2\8\i\z\5\6\u\e\5\9\t\n\7\d\2\0\v\p\z\b\6\d\l\8\t\4\f\p\n\m\r\3\u\f\v\9\k\u\o\a\g\s\p\j\r\n\k\x\6\o\6\m\8\p\g\5\k\b\y\7\c\c\3\3\1\h\p\n\t\c\u\p\n\h ]]
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:47.840   13:53:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:16:48.099  [2024-12-11 13:53:30.687220] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:48.099  [2024-12-11 13:53:30.687423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78033 ]
00:16:48.099  [2024-12-11 13:53:30.880709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:48.358  [2024-12-11 13:53:31.011559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:48.617  
[2024-12-11T13:53:32.765Z] Copying: 512/512 [B] (average 500 kBps)
00:16:49.993  
00:16:49.993   13:53:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ urb6vfi0e1ji2nndgsdfw1if505sw1jom3z0nk9erosny10dk5mtlgwvxnv4qbvb8wc45ldbjrmuix0gq2ulxt2xgyu87ilc0xi6ga490titbbeyalu6kb3z564g5wjr2o3bg4pyu6fouhwogvd4fhv99agxh8kp1m8pw0hld15y7di012mmu5dpm23uhpd3rgmrl3q35eowqjpf5o1g31jemrc84fk7n1q2rufjzw54pkn3dwr3l27isj8zuq8sw0u9psyd2pwy7cgl2aczjgwswusqkfog7o9a9bx6i24kz6r8qi80istxmcmpdpospns59xawoud6gsm78ujr8s0e5ldev1kv2rdr0c8k3digfwxd86p71a5960v4ao7m0auia19esd5l660ht0k23wt0xwvr5es4mxbcl6eou17jsw12dh8lh4fl5ymz814dypz7c0gvr016a78nqhnz8yrov247xb04q9m7ekeafmm8x8bsv75wtjxlemfqtk9v == \u\r\b\6\v\f\i\0\e\1\j\i\2\n\n\d\g\s\d\f\w\1\i\f\5\0\5\s\w\1\j\o\m\3\z\0\n\k\9\e\r\o\s\n\y\1\0\d\k\5\m\t\l\g\w\v\x\n\v\4\q\b\v\b\8\w\c\4\5\l\d\b\j\r\m\u\i\x\0\g\q\2\u\l\x\t\2\x\g\y\u\8\7\i\l\c\0\x\i\6\g\a\4\9\0\t\i\t\b\b\e\y\a\l\u\6\k\b\3\z\5\6\4\g\5\w\j\r\2\o\3\b\g\4\p\y\u\6\f\o\u\h\w\o\g\v\d\4\f\h\v\9\9\a\g\x\h\8\k\p\1\m\8\p\w\0\h\l\d\1\5\y\7\d\i\0\1\2\m\m\u\5\d\p\m\2\3\u\h\p\d\3\r\g\m\r\l\3\q\3\5\e\o\w\q\j\p\f\5\o\1\g\3\1\j\e\m\r\c\8\4\f\k\7\n\1\q\2\r\u\f\j\z\w\5\4\p\k\n\3\d\w\r\3\l\2\7\i\s\j\8\z\u\q\8\s\w\0\u\9\p\s\y\d\2\p\w\y\7\c\g\l\2\a\c\z\j\g\w\s\w\u\s\q\k\f\o\g\7\o\9\a\9\b\x\6\i\2\4\k\z\6\r\8\q\i\8\0\i\s\t\x\m\c\m\p\d\p\o\s\p\n\s\5\9\x\a\w\o\u\d\6\g\s\m\7\8\u\j\r\8\s\0\e\5\l\d\e\v\1\k\v\2\r\d\r\0\c\8\k\3\d\i\g\f\w\x\d\8\6\p\7\1\a\5\9\6\0\v\4\a\o\7\m\0\a\u\i\a\1\9\e\s\d\5\l\6\6\0\h\t\0\k\2\3\w\t\0\x\w\v\r\5\e\s\4\m\x\b\c\l\6\e\o\u\1\7\j\s\w\1\2\d\h\8\l\h\4\f\l\5\y\m\z\8\1\4\d\y\p\z\7\c\0\g\v\r\0\1\6\a\7\8\n\q\h\n\z\8\y\r\o\v\2\4\7\x\b\0\4\q\9\m\7\e\k\e\a\f\m\m\8\x\8\b\s\v\7\5\w\t\j\x\l\e\m\f\q\t\k\9\v ]]
00:16:49.993   13:53:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:49.993   13:53:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:16:49.993  [2024-12-11 13:53:32.597201] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:49.993  [2024-12-11 13:53:32.597394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78058 ]
00:16:50.251  [2024-12-11 13:53:32.793203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:50.251  [2024-12-11 13:53:32.918359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:50.510  
[2024-12-11T13:53:34.660Z] Copying: 512/512 [B] (average 500 kBps)
00:16:51.888  
00:16:51.888   13:53:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ urb6vfi0e1ji2nndgsdfw1if505sw1jom3z0nk9erosny10dk5mtlgwvxnv4qbvb8wc45ldbjrmuix0gq2ulxt2xgyu87ilc0xi6ga490titbbeyalu6kb3z564g5wjr2o3bg4pyu6fouhwogvd4fhv99agxh8kp1m8pw0hld15y7di012mmu5dpm23uhpd3rgmrl3q35eowqjpf5o1g31jemrc84fk7n1q2rufjzw54pkn3dwr3l27isj8zuq8sw0u9psyd2pwy7cgl2aczjgwswusqkfog7o9a9bx6i24kz6r8qi80istxmcmpdpospns59xawoud6gsm78ujr8s0e5ldev1kv2rdr0c8k3digfwxd86p71a5960v4ao7m0auia19esd5l660ht0k23wt0xwvr5es4mxbcl6eou17jsw12dh8lh4fl5ymz814dypz7c0gvr016a78nqhnz8yrov247xb04q9m7ekeafmm8x8bsv75wtjxlemfqtk9v == \u\r\b\6\v\f\i\0\e\1\j\i\2\n\n\d\g\s\d\f\w\1\i\f\5\0\5\s\w\1\j\o\m\3\z\0\n\k\9\e\r\o\s\n\y\1\0\d\k\5\m\t\l\g\w\v\x\n\v\4\q\b\v\b\8\w\c\4\5\l\d\b\j\r\m\u\i\x\0\g\q\2\u\l\x\t\2\x\g\y\u\8\7\i\l\c\0\x\i\6\g\a\4\9\0\t\i\t\b\b\e\y\a\l\u\6\k\b\3\z\5\6\4\g\5\w\j\r\2\o\3\b\g\4\p\y\u\6\f\o\u\h\w\o\g\v\d\4\f\h\v\9\9\a\g\x\h\8\k\p\1\m\8\p\w\0\h\l\d\1\5\y\7\d\i\0\1\2\m\m\u\5\d\p\m\2\3\u\h\p\d\3\r\g\m\r\l\3\q\3\5\e\o\w\q\j\p\f\5\o\1\g\3\1\j\e\m\r\c\8\4\f\k\7\n\1\q\2\r\u\f\j\z\w\5\4\p\k\n\3\d\w\r\3\l\2\7\i\s\j\8\z\u\q\8\s\w\0\u\9\p\s\y\d\2\p\w\y\7\c\g\l\2\a\c\z\j\g\w\s\w\u\s\q\k\f\o\g\7\o\9\a\9\b\x\6\i\2\4\k\z\6\r\8\q\i\8\0\i\s\t\x\m\c\m\p\d\p\o\s\p\n\s\5\9\x\a\w\o\u\d\6\g\s\m\7\8\u\j\r\8\s\0\e\5\l\d\e\v\1\k\v\2\r\d\r\0\c\8\k\3\d\i\g\f\w\x\d\8\6\p\7\1\a\5\9\6\0\v\4\a\o\7\m\0\a\u\i\a\1\9\e\s\d\5\l\6\6\0\h\t\0\k\2\3\w\t\0\x\w\v\r\5\e\s\4\m\x\b\c\l\6\e\o\u\1\7\j\s\w\1\2\d\h\8\l\h\4\f\l\5\y\m\z\8\1\4\d\y\p\z\7\c\0\g\v\r\0\1\6\a\7\8\n\q\h\n\z\8\y\r\o\v\2\4\7\x\b\0\4\q\9\m\7\e\k\e\a\f\m\m\8\x\8\b\s\v\7\5\w\t\j\x\l\e\m\f\q\t\k\9\v ]]
00:16:51.888   13:53:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:51.888   13:53:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:16:51.888  [2024-12-11 13:53:34.517488] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:51.888  [2024-12-11 13:53:34.517705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78082 ]
00:16:52.147  [2024-12-11 13:53:34.712424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:52.147  [2024-12-11 13:53:34.837682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:52.716  
[2024-12-11T13:53:36.425Z] Copying: 512/512 [B] (average 100 kBps)
00:16:53.653  
00:16:53.653   13:53:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ urb6vfi0e1ji2nndgsdfw1if505sw1jom3z0nk9erosny10dk5mtlgwvxnv4qbvb8wc45ldbjrmuix0gq2ulxt2xgyu87ilc0xi6ga490titbbeyalu6kb3z564g5wjr2o3bg4pyu6fouhwogvd4fhv99agxh8kp1m8pw0hld15y7di012mmu5dpm23uhpd3rgmrl3q35eowqjpf5o1g31jemrc84fk7n1q2rufjzw54pkn3dwr3l27isj8zuq8sw0u9psyd2pwy7cgl2aczjgwswusqkfog7o9a9bx6i24kz6r8qi80istxmcmpdpospns59xawoud6gsm78ujr8s0e5ldev1kv2rdr0c8k3digfwxd86p71a5960v4ao7m0auia19esd5l660ht0k23wt0xwvr5es4mxbcl6eou17jsw12dh8lh4fl5ymz814dypz7c0gvr016a78nqhnz8yrov247xb04q9m7ekeafmm8x8bsv75wtjxlemfqtk9v == \u\r\b\6\v\f\i\0\e\1\j\i\2\n\n\d\g\s\d\f\w\1\i\f\5\0\5\s\w\1\j\o\m\3\z\0\n\k\9\e\r\o\s\n\y\1\0\d\k\5\m\t\l\g\w\v\x\n\v\4\q\b\v\b\8\w\c\4\5\l\d\b\j\r\m\u\i\x\0\g\q\2\u\l\x\t\2\x\g\y\u\8\7\i\l\c\0\x\i\6\g\a\4\9\0\t\i\t\b\b\e\y\a\l\u\6\k\b\3\z\5\6\4\g\5\w\j\r\2\o\3\b\g\4\p\y\u\6\f\o\u\h\w\o\g\v\d\4\f\h\v\9\9\a\g\x\h\8\k\p\1\m\8\p\w\0\h\l\d\1\5\y\7\d\i\0\1\2\m\m\u\5\d\p\m\2\3\u\h\p\d\3\r\g\m\r\l\3\q\3\5\e\o\w\q\j\p\f\5\o\1\g\3\1\j\e\m\r\c\8\4\f\k\7\n\1\q\2\r\u\f\j\z\w\5\4\p\k\n\3\d\w\r\3\l\2\7\i\s\j\8\z\u\q\8\s\w\0\u\9\p\s\y\d\2\p\w\y\7\c\g\l\2\a\c\z\j\g\w\s\w\u\s\q\k\f\o\g\7\o\9\a\9\b\x\6\i\2\4\k\z\6\r\8\q\i\8\0\i\s\t\x\m\c\m\p\d\p\o\s\p\n\s\5\9\x\a\w\o\u\d\6\g\s\m\7\8\u\j\r\8\s\0\e\5\l\d\e\v\1\k\v\2\r\d\r\0\c\8\k\3\d\i\g\f\w\x\d\8\6\p\7\1\a\5\9\6\0\v\4\a\o\7\m\0\a\u\i\a\1\9\e\s\d\5\l\6\6\0\h\t\0\k\2\3\w\t\0\x\w\v\r\5\e\s\4\m\x\b\c\l\6\e\o\u\1\7\j\s\w\1\2\d\h\8\l\h\4\f\l\5\y\m\z\8\1\4\d\y\p\z\7\c\0\g\v\r\0\1\6\a\7\8\n\q\h\n\z\8\y\r\o\v\2\4\7\x\b\0\4\q\9\m\7\e\k\e\a\f\m\m\8\x\8\b\s\v\7\5\w\t\j\x\l\e\m\f\q\t\k\9\v ]]
00:16:53.653   13:53:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:16:53.653   13:53:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:16:53.911  [2024-12-11 13:53:36.443457] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:53.911  [2024-12-11 13:53:36.443677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78103 ]
00:16:53.911  [2024-12-11 13:53:36.636710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:54.169  [2024-12-11 13:53:36.765259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:54.428  
[2024-12-11T13:53:38.575Z] Copying: 512/512 [B] (average 100 kBps)
00:16:55.803  
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ urb6vfi0e1ji2nndgsdfw1if505sw1jom3z0nk9erosny10dk5mtlgwvxnv4qbvb8wc45ldbjrmuix0gq2ulxt2xgyu87ilc0xi6ga490titbbeyalu6kb3z564g5wjr2o3bg4pyu6fouhwogvd4fhv99agxh8kp1m8pw0hld15y7di012mmu5dpm23uhpd3rgmrl3q35eowqjpf5o1g31jemrc84fk7n1q2rufjzw54pkn3dwr3l27isj8zuq8sw0u9psyd2pwy7cgl2aczjgwswusqkfog7o9a9bx6i24kz6r8qi80istxmcmpdpospns59xawoud6gsm78ujr8s0e5ldev1kv2rdr0c8k3digfwxd86p71a5960v4ao7m0auia19esd5l660ht0k23wt0xwvr5es4mxbcl6eou17jsw12dh8lh4fl5ymz814dypz7c0gvr016a78nqhnz8yrov247xb04q9m7ekeafmm8x8bsv75wtjxlemfqtk9v == \u\r\b\6\v\f\i\0\e\1\j\i\2\n\n\d\g\s\d\f\w\1\i\f\5\0\5\s\w\1\j\o\m\3\z\0\n\k\9\e\r\o\s\n\y\1\0\d\k\5\m\t\l\g\w\v\x\n\v\4\q\b\v\b\8\w\c\4\5\l\d\b\j\r\m\u\i\x\0\g\q\2\u\l\x\t\2\x\g\y\u\8\7\i\l\c\0\x\i\6\g\a\4\9\0\t\i\t\b\b\e\y\a\l\u\6\k\b\3\z\5\6\4\g\5\w\j\r\2\o\3\b\g\4\p\y\u\6\f\o\u\h\w\o\g\v\d\4\f\h\v\9\9\a\g\x\h\8\k\p\1\m\8\p\w\0\h\l\d\1\5\y\7\d\i\0\1\2\m\m\u\5\d\p\m\2\3\u\h\p\d\3\r\g\m\r\l\3\q\3\5\e\o\w\q\j\p\f\5\o\1\g\3\1\j\e\m\r\c\8\4\f\k\7\n\1\q\2\r\u\f\j\z\w\5\4\p\k\n\3\d\w\r\3\l\2\7\i\s\j\8\z\u\q\8\s\w\0\u\9\p\s\y\d\2\p\w\y\7\c\g\l\2\a\c\z\j\g\w\s\w\u\s\q\k\f\o\g\7\o\9\a\9\b\x\6\i\2\4\k\z\6\r\8\q\i\8\0\i\s\t\x\m\c\m\p\d\p\o\s\p\n\s\5\9\x\a\w\o\u\d\6\g\s\m\7\8\u\j\r\8\s\0\e\5\l\d\e\v\1\k\v\2\r\d\r\0\c\8\k\3\d\i\g\f\w\x\d\8\6\p\7\1\a\5\9\6\0\v\4\a\o\7\m\0\a\u\i\a\1\9\e\s\d\5\l\6\6\0\h\t\0\k\2\3\w\t\0\x\w\v\r\5\e\s\4\m\x\b\c\l\6\e\o\u\1\7\j\s\w\1\2\d\h\8\l\h\4\f\l\5\y\m\z\8\1\4\d\y\p\z\7\c\0\g\v\r\0\1\6\a\7\8\n\q\h\n\z\8\y\r\o\v\2\4\7\x\b\0\4\q\9\m\7\e\k\e\a\f\m\m\8\x\8\b\s\v\7\5\w\t\j\x\l\e\m\f\q\t\k\9\v ]]
00:16:55.803  
00:16:55.803  real	0m15.448s
00:16:55.803  user	0m12.277s
00:16:55.803  sys	0m2.239s
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:16:55.803  ************************************
00:16:55.803  END TEST dd_flags_misc_forced_aio
00:16:55.803  ************************************
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:16:55.803  
00:16:55.803  real	1m6.658s
00:16:55.803  user	0m50.632s
00:16:55.803  sys	0m10.457s
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:55.803   13:53:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:16:55.803  ************************************
00:16:55.803  END TEST spdk_dd_posix
00:16:55.803  ************************************
00:16:55.803   13:53:38 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:16:55.803   13:53:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:55.803   13:53:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:55.803   13:53:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:16:55.803  ************************************
00:16:55.803  START TEST spdk_dd_malloc
00:16:55.803  ************************************
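The '[' 2 -le 1 ']' check and the START/END banners around every test come from the run_test wrapper in common/autotest_common.sh: it takes a test name plus a command, refuses fewer than two arguments, and times the command, which is where the recurring real/user/sys lines originate. A loose sketch of that shape (the real wrapper also manages xtrace and failure bookkeeping):

    run_test() {
        [ $# -le 1 ] && return 1    # need a test name plus a command
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                   # emits the real/user/sys summary
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }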
00:16:55.803   13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:16:55.803  * Looking for test storage...
00:16:55.803  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:55.803      13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version
00:16:55.803      13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-:
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-:
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<'
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1
00:16:55.803     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:55.804     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1
00:16:55.804     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:55.804      13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0
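The scripts/common.sh trace above is deciding whether the installed lcov is at least version 1.15: lt 1.15 2 calls cmp_versions, which splits both versions on dots, dashes, and colons (IFS=.-:) and compares them component by component, returning 0 when the requested relation holds. A compressed sketch of that comparison:

    cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]    # all components equal: only <=, >= or = hold
    }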
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:56.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.063  		--rc genhtml_branch_coverage=1
00:16:56.063  		--rc genhtml_function_coverage=1
00:16:56.063  		--rc genhtml_legend=1
00:16:56.063  		--rc geninfo_all_blocks=1
00:16:56.063  		--rc geninfo_unexecuted_blocks=1
00:16:56.063  		
00:16:56.063  		'
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:56.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.063  		--rc genhtml_branch_coverage=1
00:16:56.063  		--rc genhtml_function_coverage=1
00:16:56.063  		--rc genhtml_legend=1
00:16:56.063  		--rc geninfo_all_blocks=1
00:16:56.063  		--rc geninfo_unexecuted_blocks=1
00:16:56.063  		
00:16:56.063  		'
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:56.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.063  		--rc genhtml_branch_coverage=1
00:16:56.063  		--rc genhtml_function_coverage=1
00:16:56.063  		--rc genhtml_legend=1
00:16:56.063  		--rc geninfo_all_blocks=1
00:16:56.063  		--rc geninfo_unexecuted_blocks=1
00:16:56.063  		
00:16:56.063  		'
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:56.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:56.063  		--rc genhtml_branch_coverage=1
00:16:56.063  		--rc genhtml_function_coverage=1
00:16:56.063  		--rc genhtml_legend=1
00:16:56.063  		--rc geninfo_all_blocks=1
00:16:56.063  		--rc geninfo_unexecuted_blocks=1
00:16:56.063  		
00:16:56.063  		'
00:16:56.063    13:53:38 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:56.063     13:53:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:56.063      13:53:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:16:56.063      13:53:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:16:56.063      13:53:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:16:56.064      13:53:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:16:56.064      13:53:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # export PATH
00:16:56.064      13:53:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x
00:16:56.064  ************************************
00:16:56.064  START TEST dd_malloc_copy
00:16:56.064  ************************************
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512')
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512')
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1
00:16:56.064   13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62
00:16:56.064    13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf
00:16:56.064    13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable
00:16:56.064    13:53:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:16:56.064  {
00:16:56.064    "subsystems": [
00:16:56.064      {
00:16:56.064        "subsystem": "bdev",
00:16:56.064        "config": [
00:16:56.064          {
00:16:56.064            "params": {
00:16:56.064              "block_size": 512,
00:16:56.064              "num_blocks": 1048576,
00:16:56.064              "name": "malloc0"
00:16:56.064            },
00:16:56.064            "method": "bdev_malloc_create"
00:16:56.064          },
00:16:56.064          {
00:16:56.064            "params": {
00:16:56.064              "block_size": 512,
00:16:56.064              "num_blocks": 1048576,
00:16:56.064              "name": "malloc1"
00:16:56.064            },
00:16:56.064            "method": "bdev_malloc_create"
00:16:56.064          },
00:16:56.064          {
00:16:56.064            "method": "bdev_wait_for_examine"
00:16:56.064          }
00:16:56.064        ]
00:16:56.064      }
00:16:56.064    ]
00:16:56.064  }
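The JSON above is produced by gen_conf (dd/common.sh) and handed to spdk_dd on a file descriptor; the /dev/fd/62 in the trace is what bash process substitution yields. Each malloc bdev is 1048576 blocks of 512 B, i.e. 512 MiB, which matches the Copying: 512/512 [MB] progress below. The invocation reduces to roughly:

    # <(gen_conf) appears to spdk_dd as /dev/fd/NN, e.g. the /dev/fd/62 above
    spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_conf)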
00:16:56.064  [2024-12-11 13:53:38.680824] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:16:56.064  [2024-12-11 13:53:38.681016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78204 ]
00:16:56.322  [2024-12-11 13:53:38.878462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:56.322  [2024-12-11 13:53:39.011000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:58.852  
[2024-12-11T13:53:42.558Z] Copying: 197/512 [MB] (197 MBps)
[2024-12-11T13:53:43.126Z] Copying: 403/512 [MB] (205 MBps)
[2024-12-11T13:53:47.318Z] Copying: 512/512 [MB] (average 202 MBps)
00:17:04.546  
00:17:04.546   13:53:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62
00:17:04.546    13:53:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf
00:17:04.546    13:53:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable
00:17:04.546    13:53:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:17:04.546  {
00:17:04.546    "subsystems": [
00:17:04.546      {
00:17:04.546        "subsystem": "bdev",
00:17:04.546        "config": [
00:17:04.546          {
00:17:04.546            "params": {
00:17:04.546              "block_size": 512,
00:17:04.546              "num_blocks": 1048576,
00:17:04.546              "name": "malloc0"
00:17:04.546            },
00:17:04.546            "method": "bdev_malloc_create"
00:17:04.546          },
00:17:04.546          {
00:17:04.546            "params": {
00:17:04.546              "block_size": 512,
00:17:04.546              "num_blocks": 1048576,
00:17:04.546              "name": "malloc1"
00:17:04.546            },
00:17:04.546            "method": "bdev_malloc_create"
00:17:04.546          },
00:17:04.546          {
00:17:04.546            "method": "bdev_wait_for_examine"
00:17:04.546          }
00:17:04.546        ]
00:17:04.546      }
00:17:04.546    ]
00:17:04.546  }
00:17:04.546  [2024-12-11 13:53:47.150098] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:04.546  [2024-12-11 13:53:47.150281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78303 ]
00:17:04.805  [2024-12-11 13:53:47.348781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:04.805  [2024-12-11 13:53:47.480016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:07.341  
[2024-12-11T13:53:51.053Z] Copying: 205/512 [MB] (205 MBps)
[2024-12-11T13:53:51.632Z] Copying: 408/512 [MB] (202 MBps)
[2024-12-11T13:53:55.826Z] Copying: 512/512 [MB] (average 204 MBps)
00:17:13.054  
00:17:13.054  
00:17:13.054  real	0m16.912s
00:17:13.054  user	0m15.344s
00:17:13.054  sys	0m1.379s
00:17:13.054   13:53:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:13.054   13:53:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:17:13.054  ************************************
00:17:13.054  END TEST dd_malloc_copy
00:17:13.054  ************************************
00:17:13.054  
00:17:13.054  real	0m17.156s
00:17:13.054  user	0m15.462s
00:17:13.054  sys	0m1.524s
00:17:13.054   13:53:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:13.054   13:53:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x
00:17:13.054  ************************************
00:17:13.054  END TEST spdk_dd_malloc
00:17:13.054  ************************************
00:17:13.054   13:53:55 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0
00:17:13.054   13:53:55 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:13.054   13:53:55 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:13.054   13:53:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:17:13.054  ************************************
00:17:13.054  START TEST spdk_dd_bdev_to_bdev
00:17:13.054  ************************************
00:17:13.054   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0
00:17:13.054  * Looking for test storage...
00:17:13.054  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-:
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-:
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<'
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:13.054  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.054  		--rc genhtml_branch_coverage=1
00:17:13.054  		--rc genhtml_function_coverage=1
00:17:13.054  		--rc genhtml_legend=1
00:17:13.054  		--rc geninfo_all_blocks=1
00:17:13.054  		--rc geninfo_unexecuted_blocks=1
00:17:13.054  		
00:17:13.054  		'
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:13.054  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.054  		--rc genhtml_branch_coverage=1
00:17:13.054  		--rc genhtml_function_coverage=1
00:17:13.054  		--rc genhtml_legend=1
00:17:13.054  		--rc geninfo_all_blocks=1
00:17:13.054  		--rc geninfo_unexecuted_blocks=1
00:17:13.054  		
00:17:13.054  		'
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:13.054  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.054  		--rc genhtml_branch_coverage=1
00:17:13.054  		--rc genhtml_function_coverage=1
00:17:13.054  		--rc genhtml_legend=1
00:17:13.054  		--rc geninfo_all_blocks=1
00:17:13.054  		--rc geninfo_unexecuted_blocks=1
00:17:13.054  		
00:17:13.054  		'
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:13.054  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.054  		--rc genhtml_branch_coverage=1
00:17:13.054  		--rc genhtml_function_coverage=1
00:17:13.054  		--rc genhtml_legend=1
00:17:13.054  		--rc geninfo_all_blocks=1
00:17:13.054  		--rc geninfo_unexecuted_blocks=1
00:17:13.054  		
00:17:13.054  		'
00:17:13.054    13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:13.054     13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:13.054      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:13.055      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:13.055      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # export PATH
00:17:13.055      13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@")
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 ))
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie')
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096')
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0
00:17:13.055   13:53:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256
00:17:13.313  [2024-12-11 13:53:55.915487] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:13.313  [2024-12-11 13:53:55.915700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78470 ]
00:17:13.572  [2024-12-11 13:53:56.111966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:13.572  [2024-12-11 13:53:56.240486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:14.141  
[2024-12-11T13:53:58.306Z] Copying: 256/256 [MB] (average 1354 MBps)
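
The spdk_dd invocation at bdev_to_bdev.sh@83 above pre-fills the aio backing file with 256 one-MiB blocks of zeroes (256 MiB total). For reference, the coreutils equivalent is roughly (assumption: writing the plain file directly, without the bdev layer):

    dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 bs=1048576 count=256
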
00:17:15.534  
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it'
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it'
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:15.535  ************************************
00:17:15.535  START TEST dd_inflate_file
00:17:15.535  ************************************
00:17:15.535   13:53:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:17:15.535  [2024-12-11 13:53:58.039137] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:15.535  [2024-12-11 13:53:58.039329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78497 ]
00:17:15.535  [2024-12-11 13:53:58.233676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:15.794  [2024-12-11 13:53:58.357340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:16.054  
[2024-12-11T13:54:00.206Z] Copying: 64/64 [MB] (average 1084 MBps)
00:17:17.434  
00:17:17.434  
00:17:17.434  real	0m1.966s
00:17:17.434  user	0m1.525s
00:17:17.434  sys	0m0.328s
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:17.434  ************************************
00:17:17.434  END TEST dd_inflate_file
00:17:17.434  ************************************
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x
00:17:17.434    13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891
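
The 67108891-byte size is consistent: the magic line echoed at bdev_to_bdev.sh@93 is 27 bytes ("This Is Our Magic, find it" is 26 characters plus a newline), and dd_inflate_file then appended 64 MiB of zeroes via --oflag=append:

    echo $(( 64 * 1048576 + 27 ))   # 67108891 = 64 MiB + 27-byte magic line
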
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:17.434    13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf
00:17:17.434    13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:17:17.434    13:53:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:17.434  ************************************
00:17:17.434  START TEST dd_copy_to_out_bdev
00:17:17.434  ************************************
00:17:17.434   13:53:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:17:17.434  {
00:17:17.434    "subsystems": [
00:17:17.434      {
00:17:17.434        "subsystem": "bdev",
00:17:17.434        "config": [
00:17:17.434          {
00:17:17.434            "params": {
00:17:17.434              "block_size": 4096,
00:17:17.434              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:17.434              "name": "aio1"
00:17:17.434            },
00:17:17.434            "method": "bdev_aio_create"
00:17:17.434          },
00:17:17.434          {
00:17:17.434            "params": {
00:17:17.434              "trtype": "pcie",
00:17:17.434              "traddr": "0000:00:10.0",
00:17:17.434              "name": "Nvme0"
00:17:17.434            },
00:17:17.434            "method": "bdev_nvme_attach_controller"
00:17:17.434          },
00:17:17.434          {
00:17:17.434            "method": "bdev_wait_for_examine"
00:17:17.434          }
00:17:17.434        ]
00:17:17.434      }
00:17:17.434    ]
00:17:17.434  }
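
The JSON blob above is the bdev subsystem config that gen_conf emits for spdk_dd; the --json /dev/fd/62 argument is just the process-substitution descriptor it arrives on. A sketch of the pattern (assuming gen_conf prints exactly the config shown):

    # feed the generated config to spdk_dd without a temporary file
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json <(gen_conf)
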
00:17:17.434  [2024-12-11 13:54:00.069541] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:17.434  [2024-12-11 13:54:00.069760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78545 ]
00:17:17.702  [2024-12-11 13:54:00.266764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:17.702  [2024-12-11 13:54:00.392442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:19.082  
[2024-12-11T13:54:03.233Z] Copying: 64/64 [MB] (average 70 MBps)
00:17:20.461  
00:17:20.461  
00:17:20.461  real	0m2.879s
00:17:20.461  user	0m2.411s
00:17:20.461  sys	0m0.369s
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:20.461  ************************************
00:17:20.461  END TEST dd_copy_to_out_bdev
00:17:20.461  ************************************
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:20.461  ************************************
00:17:20.461  START TEST dd_offset_magic
00:17:20.461  ************************************
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64)
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:17:20.461   13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
00:17:20.461    13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:17:20.461    13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:17:20.461    13:54:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:17:20.461  {
00:17:20.461    "subsystems": [
00:17:20.461      {
00:17:20.461        "subsystem": "bdev",
00:17:20.461        "config": [
00:17:20.461          {
00:17:20.461            "params": {
00:17:20.461              "block_size": 4096,
00:17:20.461              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:20.461              "name": "aio1"
00:17:20.461            },
00:17:20.461            "method": "bdev_aio_create"
00:17:20.461          },
00:17:20.461          {
00:17:20.461            "params": {
00:17:20.461              "trtype": "pcie",
00:17:20.461              "traddr": "0000:00:10.0",
00:17:20.461              "name": "Nvme0"
00:17:20.461            },
00:17:20.461            "method": "bdev_nvme_attach_controller"
00:17:20.461          },
00:17:20.461          {
00:17:20.461            "method": "bdev_wait_for_examine"
00:17:20.461          }
00:17:20.461        ]
00:17:20.461      }
00:17:20.461    ]
00:17:20.461  }
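
dd_offset_magic's round trip, reconstructed from the trace (the command lines, offsets, and magic string are taken from the log; the loop and redirection details are assumptions):

    for offset in 16 64; do
        # copy 65 MiB from the NVMe bdev into the aio bdev at the offset
        spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=$offset --bs=1048576 --json <(gen_conf)
        # read one block back from that offset of the aio bdev into a dump file
        spdk_dd --ib=aio1 --of=dd.dump1 --count=1 --skip=$offset --bs=1048576 --json <(gen_conf)
        # the first 26 bytes must be the magic written at the head of dd.dump0
        read -rn26 magic_check < dd.dump1
        [[ $magic_check == "This Is Our Magic, find it" ]]
    done
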
00:17:20.461  [2024-12-11 13:54:02.982494] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:20.461  [2024-12-11 13:54:02.982651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78597 ]
00:17:20.461  [2024-12-11 13:54:03.157938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:20.720  [2024-12-11 13:54:03.284635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:21.295  
[2024-12-11T13:54:05.004Z] Copying: 65/65 [MB] (average 1250 MBps)
00:17:22.232  
00:17:22.232   13:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62
00:17:22.232    13:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:17:22.232    13:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:17:22.232    13:54:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:17:22.232  {
00:17:22.232    "subsystems": [
00:17:22.232      {
00:17:22.232        "subsystem": "bdev",
00:17:22.232        "config": [
00:17:22.232          {
00:17:22.232            "params": {
00:17:22.232              "block_size": 4096,
00:17:22.232              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:22.232              "name": "aio1"
00:17:22.232            },
00:17:22.232            "method": "bdev_aio_create"
00:17:22.232          },
00:17:22.232          {
00:17:22.232            "params": {
00:17:22.232              "trtype": "pcie",
00:17:22.232              "traddr": "0000:00:10.0",
00:17:22.232              "name": "Nvme0"
00:17:22.232            },
00:17:22.232            "method": "bdev_nvme_attach_controller"
00:17:22.232          },
00:17:22.232          {
00:17:22.232            "method": "bdev_wait_for_examine"
00:17:22.232          }
00:17:22.232        ]
00:17:22.232      }
00:17:22.232    ]
00:17:22.232  }
00:17:22.492  [2024-12-11 13:54:05.048817] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:22.492  [2024-12-11 13:54:05.049013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78629 ]
00:17:22.492  [2024-12-11 13:54:05.245473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:22.751  [2024-12-11 13:54:05.372822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:23.319  
[2024-12-11T13:54:07.031Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:17:24.259  
00:17:24.259   13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:17:24.259   13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:17:24.259   13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:17:24.259   13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62
00:17:24.259    13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:17:24.259    13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:17:24.259    13:54:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:17:24.259  {
00:17:24.259    "subsystems": [
00:17:24.259      {
00:17:24.259        "subsystem": "bdev",
00:17:24.259        "config": [
00:17:24.259          {
00:17:24.259            "params": {
00:17:24.259              "block_size": 4096,
00:17:24.259              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:24.259              "name": "aio1"
00:17:24.259            },
00:17:24.259            "method": "bdev_aio_create"
00:17:24.259          },
00:17:24.259          {
00:17:24.259            "params": {
00:17:24.259              "trtype": "pcie",
00:17:24.259              "traddr": "0000:00:10.0",
00:17:24.259              "name": "Nvme0"
00:17:24.259            },
00:17:24.259            "method": "bdev_nvme_attach_controller"
00:17:24.259          },
00:17:24.259          {
00:17:24.259            "method": "bdev_wait_for_examine"
00:17:24.259          }
00:17:24.259        ]
00:17:24.259      }
00:17:24.259    ]
00:17:24.259  }
00:17:24.259  [2024-12-11 13:54:06.954237] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:24.259  [2024-12-11 13:54:06.954425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78654 ]
00:17:24.519  [2024-12-11 13:54:07.145957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:24.519  [2024-12-11 13:54:07.272314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:25.089  
[2024-12-11T13:54:09.242Z] Copying: 65/65 [MB] (average 1181 MBps)
00:17:26.470  
00:17:26.470   13:54:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62
00:17:26.470    13:54:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:17:26.470    13:54:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:17:26.470    13:54:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:17:26.470  {
00:17:26.470    "subsystems": [
00:17:26.470      {
00:17:26.470        "subsystem": "bdev",
00:17:26.470        "config": [
00:17:26.470          {
00:17:26.470            "params": {
00:17:26.470              "block_size": 4096,
00:17:26.470              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:26.470              "name": "aio1"
00:17:26.470            },
00:17:26.470            "method": "bdev_aio_create"
00:17:26.470          },
00:17:26.470          {
00:17:26.470            "params": {
00:17:26.470              "trtype": "pcie",
00:17:26.470              "traddr": "0000:00:10.0",
00:17:26.470              "name": "Nvme0"
00:17:26.470            },
00:17:26.470            "method": "bdev_nvme_attach_controller"
00:17:26.470          },
00:17:26.470          {
00:17:26.470            "method": "bdev_wait_for_examine"
00:17:26.470          }
00:17:26.470        ]
00:17:26.470      }
00:17:26.470    ]
00:17:26.470  }
00:17:26.470  [2024-12-11 13:54:09.040357] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:26.470  [2024-12-11 13:54:09.040544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78682 ]
00:17:26.470  [2024-12-11 13:54:09.238814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:26.730  [2024-12-11 13:54:09.365838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:27.299  
[2024-12-11T13:54:11.008Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:17:28.236  
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:17:28.236  
00:17:28.236  real	0m7.975s
00:17:28.236  user	0m6.363s
00:17:28.236  sys	0m1.175s
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:28.236  ************************************
00:17:28.236  END TEST dd_offset_magic
00:17:28.236  ************************************
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref=
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5
00:17:28.236   13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
00:17:28.236    13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf
00:17:28.236    13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:17:28.236    13:54:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:28.236  {
00:17:28.236    "subsystems": [
00:17:28.236      {
00:17:28.236        "subsystem": "bdev",
00:17:28.236        "config": [
00:17:28.236          {
00:17:28.236            "params": {
00:17:28.236              "block_size": 4096,
00:17:28.236              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:28.236              "name": "aio1"
00:17:28.236            },
00:17:28.236            "method": "bdev_aio_create"
00:17:28.236          },
00:17:28.236          {
00:17:28.236            "params": {
00:17:28.236              "trtype": "pcie",
00:17:28.236              "traddr": "0000:00:10.0",
00:17:28.236              "name": "Nvme0"
00:17:28.236            },
00:17:28.236            "method": "bdev_nvme_attach_controller"
00:17:28.236          },
00:17:28.236          {
00:17:28.236            "method": "bdev_wait_for_examine"
00:17:28.236          }
00:17:28.236        ]
00:17:28.236      }
00:17:28.236    ]
00:17:28.236  }
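
The count=5 used by clear_nvme is the requested size rounded up to whole 1 MiB blocks; 4194330 bytes is 4 MiB plus the 26-byte magic string, so five blocks are needed:

    echo $(( (4194330 + 1048576 - 1) / 1048576 ))   # ceil(4194330 / 1 MiB) = 5
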
00:17:28.236  [2024-12-11 13:54:11.019021] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:28.236  [2024-12-11 13:54:11.019214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78729 ]
00:17:28.495  [2024-12-11 13:54:11.214109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:28.754  [2024-12-11 13:54:11.340363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.345  
[2024-12-11T13:54:13.062Z] Copying: 5120/5120 [kB] (average 833 MBps)
00:17:30.290  
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref=
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5
00:17:30.290   13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62
00:17:30.290    13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf
00:17:30.290    13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:17:30.290    13:54:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:30.290  {
00:17:30.290    "subsystems": [
00:17:30.290      {
00:17:30.290        "subsystem": "bdev",
00:17:30.290        "config": [
00:17:30.290          {
00:17:30.290            "params": {
00:17:30.290              "block_size": 4096,
00:17:30.290              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:17:30.290              "name": "aio1"
00:17:30.290            },
00:17:30.290            "method": "bdev_aio_create"
00:17:30.290          },
00:17:30.290          {
00:17:30.290            "params": {
00:17:30.290              "trtype": "pcie",
00:17:30.290              "traddr": "0000:00:10.0",
00:17:30.290              "name": "Nvme0"
00:17:30.290            },
00:17:30.290            "method": "bdev_nvme_attach_controller"
00:17:30.290          },
00:17:30.290          {
00:17:30.290            "method": "bdev_wait_for_examine"
00:17:30.290          }
00:17:30.290        ]
00:17:30.290      }
00:17:30.290    ]
00:17:30.290  }
00:17:30.548  [2024-12-11 13:54:13.087850] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:30.548  [2024-12-11 13:54:13.088042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78761 ]
00:17:30.548  [2024-12-11 13:54:13.282262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:30.807  [2024-12-11 13:54:13.408424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:31.376  
[2024-12-11T13:54:15.086Z] Copying: 5120/5120 [kB] (average 1000 MBps)
00:17:32.314  
00:17:32.314   13:54:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1
00:17:32.314  ************************************
00:17:32.314  END TEST spdk_dd_bdev_to_bdev
00:17:32.314  ************************************
00:17:32.314  
00:17:32.314  real	0m19.407s
00:17:32.314  user	0m15.237s
00:17:32.314  sys	0m3.169s
00:17:32.314   13:54:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:32.314   13:54:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:32.314   13:54:15 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 ))
00:17:32.314   13:54:15 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:17:32.314   13:54:15 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:32.314   13:54:15 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:32.314   13:54:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:17:32.314  ************************************
00:17:32.314  START TEST spdk_dd_sparse
00:17:32.314  ************************************
00:17:32.314   13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:17:32.574  * Looking for test storage...
00:17:32.574  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-:
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-:
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<'
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:32.574      13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2
00:17:32.574     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:32.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:32.575  		--rc genhtml_branch_coverage=1
00:17:32.575  		--rc genhtml_function_coverage=1
00:17:32.575  		--rc genhtml_legend=1
00:17:32.575  		--rc geninfo_all_blocks=1
00:17:32.575  		--rc geninfo_unexecuted_blocks=1
00:17:32.575  		
00:17:32.575  		'
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:32.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:32.575  		--rc genhtml_branch_coverage=1
00:17:32.575  		--rc genhtml_function_coverage=1
00:17:32.575  		--rc genhtml_legend=1
00:17:32.575  		--rc geninfo_all_blocks=1
00:17:32.575  		--rc geninfo_unexecuted_blocks=1
00:17:32.575  		
00:17:32.575  		'
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:32.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:32.575  		--rc genhtml_branch_coverage=1
00:17:32.575  		--rc genhtml_function_coverage=1
00:17:32.575  		--rc genhtml_legend=1
00:17:32.575  		--rc geninfo_all_blocks=1
00:17:32.575  		--rc geninfo_unexecuted_blocks=1
00:17:32.575  		
00:17:32.575  		'
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:32.575  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:32.575  		--rc genhtml_branch_coverage=1
00:17:32.575  		--rc genhtml_function_coverage=1
00:17:32.575  		--rc genhtml_legend=1
00:17:32.575  		--rc geninfo_all_blocks=1
00:17:32.575  		--rc geninfo_unexecuted_blocks=1
00:17:32.575  		
00:17:32.575  		'
00:17:32.575    13:54:15 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:32.575     13:54:15 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:32.575      13:54:15 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:32.575      13:54:15 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:32.575      13:54:15 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:32.575      13:54:15 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:32.575      13:54:15 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # export PATH
00:17:32.575      13:54:15 spdk_dd.spdk_dd_sparse -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1
00:17:32.575  1+0 records in
00:17:32.575  1+0 records out
00:17:32.575  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00918361 s, 457 MB/s
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
00:17:32.575  1+0 records in
00:17:32.575  1+0 records out
00:17:32.575  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00728298 s, 576 MB/s
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
00:17:32.575  1+0 records in
00:17:32.575  1+0 records out
00:17:32.575  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00773488 s, 542 MB/s
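
The prepare step builds a 100 MiB backing file for the aio bdev and a deliberately sparse input file: three 4 MiB extents at offsets 0, 16 MiB, and 32 MiB (seek counts in 4 MiB blocks), with holes in between. The commands are verbatim from the trace; the layout comments are inferred:

    truncate dd_sparse_aio_disk --size 104857600         # 100 MiB aio backing file
    dd if=/dev/zero of=file_zero1 bs=4M count=1          # data at [0 MiB, 4 MiB)
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data at [16 MiB, 20 MiB)
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data at [32 MiB, 36 MiB)
    # apparent size 36 MiB (37748736 B); only 3 x 4 MiB = 12 MiB actually allocated
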
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:17:32.575  ************************************
00:17:32.575  START TEST dd_sparse_file_to_file
00:17:32.575  ************************************
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore')
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1
00:17:32.575    13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf
00:17:32.575   13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
00:17:32.575    13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable
00:17:32.575    13:54:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:17:32.575  {
00:17:32.575    "subsystems": [
00:17:32.575      {
00:17:32.575        "subsystem": "bdev",
00:17:32.575        "config": [
00:17:32.575          {
00:17:32.575            "params": {
00:17:32.575              "block_size": 4096,
00:17:32.575              "filename": "dd_sparse_aio_disk",
00:17:32.575              "name": "dd_aio"
00:17:32.575            },
00:17:32.575            "method": "bdev_aio_create"
00:17:32.575          },
00:17:32.575          {
00:17:32.575            "params": {
00:17:32.575              "lvs_name": "dd_lvstore",
00:17:32.575              "bdev_name": "dd_aio"
00:17:32.575            },
00:17:32.575            "method": "bdev_lvol_create_lvstore"
00:17:32.575          },
00:17:32.575          {
00:17:32.575            "method": "bdev_wait_for_examine"
00:17:32.575          }
00:17:32.575        ]
00:17:32.575      }
00:17:32.575    ]
00:17:32.575  }
00:17:32.835  [2024-12-11 13:54:15.402957] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:32.835  [2024-12-11 13:54:15.403154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78855 ]
00:17:32.835  [2024-12-11 13:54:15.597390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:33.093  [2024-12-11 13:54:15.727035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:33.661  
[2024-12-11T13:54:17.816Z] Copying: 12/36 [MB] (average 923 MBps)
00:17:35.044  
00:17:35.044    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736
00:17:35.044    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:17:35.044    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576
00:17:35.044    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2
00:17:35.044  ************************************
00:17:35.044  END TEST dd_sparse_file_to_file
00:17:35.044  ************************************
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]]
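
The stat calls check both dimensions of sparseness: %s (apparent size) is 37748736 bytes (36 MiB) for input and output, and %b (allocated 512-byte blocks) is 24576 on both, i.e. 24576 x 512 = 12582912 bytes, exactly the three written extents. Condensed (an equivalent rephrasing, not the script's literal code):

    [[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]   # same apparent size
    [[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]   # same allocated blocks
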
00:17:35.044  
00:17:35.044  real	0m2.213s
00:17:35.044  user	0m1.742s
00:17:35.044  sys	0m0.339s
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:35.044   13:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:17:35.044  ************************************
00:17:35.044  START TEST dd_sparse_file_to_bdev
00:17:35.045  ************************************
00:17:35.045   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev
00:17:35.045   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:17:35.045   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0
00:17:35.045   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true')
00:17:35.045   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1
00:17:35.045   13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
00:17:35.045    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf
00:17:35.045    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:17:35.045    13:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:35.045  {
00:17:35.045    "subsystems": [
00:17:35.045      {
00:17:35.045        "subsystem": "bdev",
00:17:35.045        "config": [
00:17:35.045          {
00:17:35.045            "params": {
00:17:35.045              "block_size": 4096,
00:17:35.045              "filename": "dd_sparse_aio_disk",
00:17:35.045              "name": "dd_aio"
00:17:35.045            },
00:17:35.045            "method": "bdev_aio_create"
00:17:35.045          },
00:17:35.045          {
00:17:35.045            "params": {
00:17:35.045              "lvs_name": "dd_lvstore",
00:17:35.045              "lvol_name": "dd_lvol",
00:17:35.045              "size_in_mib": 36,
00:17:35.045              "thin_provision": true
00:17:35.045            },
00:17:35.045            "method": "bdev_lvol_create"
00:17:35.045          },
00:17:35.045          {
00:17:35.045            "method": "bdev_wait_for_examine"
00:17:35.045          }
00:17:35.045        ]
00:17:35.045      }
00:17:35.045    ]
00:17:35.045  }
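
The bdev_lvol_create params above make the 36 MiB lvol thin-provisioned, so clusters are allocated only as the copy writes them, and the sparse input should consume roughly its 12 MiB of real data. A hypothetical rpc.py equivalent of this JSON config (the method names appear in the config itself; the CLI argument spelling is an assumption):

    rpc.py bdev_aio_create dd_sparse_aio_disk dd_aio 4096   # filename, name, block_size
    rpc.py bdev_lvol_create_lvstore dd_aio dd_lvstore       # bdev_name, lvs_name
    rpc.py bdev_lvol_create -l dd_lvstore -t dd_lvol 36     # thin-provisioned, 36 MiB
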
00:17:35.045  [2024-12-11 13:54:17.660736] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:35.045  [2024-12-11 13:54:17.660950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78912 ]
00:17:35.305  [2024-12-11 13:54:17.843528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:35.305  [2024-12-11 13:54:17.994771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:35.874  
[2024-12-11T13:54:20.028Z] Copying: 12/36 [MB] (average 500 MBps)
00:17:37.256  
00:17:37.256  
00:17:37.256  real	0m2.158s
00:17:37.256  user	0m1.747s
00:17:37.256  sys	0m0.312s
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:37.256  ************************************
00:17:37.256  END TEST dd_sparse_file_to_bdev
00:17:37.256  ************************************
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:17:37.256  ************************************
00:17:37.256  START TEST dd_sparse_bdev_to_file
00:17:37.256  ************************************
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0
00:17:37.256   13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
00:17:37.256    13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf
00:17:37.256    13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable
00:17:37.256    13:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:17:37.256  {
00:17:37.256    "subsystems": [
00:17:37.256      {
00:17:37.256        "subsystem": "bdev",
00:17:37.256        "config": [
00:17:37.256          {
00:17:37.256            "params": {
00:17:37.256              "block_size": 4096,
00:17:37.256              "filename": "dd_sparse_aio_disk",
00:17:37.256              "name": "dd_aio"
00:17:37.256            },
00:17:37.256            "method": "bdev_aio_create"
00:17:37.256          },
00:17:37.256          {
00:17:37.256            "method": "bdev_wait_for_examine"
00:17:37.256          }
00:17:37.256        ]
00:17:37.256      }
00:17:37.256    ]
00:17:37.256  }
00:17:37.256  [2024-12-11 13:54:19.881882] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:37.256  [2024-12-11 13:54:19.882096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78960 ]
00:17:37.516  [2024-12-11 13:54:20.079950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:37.516  [2024-12-11 13:54:20.263409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:38.084  [2024-12-11T13:54:22.235Z] Copying: 12/36 [MB] (average 1090 MBps)
00:17:39.463  
00:17:39.463    13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736
00:17:39.463    13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:17:39.463    13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576
00:17:39.463    13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]]
00:17:39.463  
00:17:39.463  real	0m2.174s
00:17:39.463  user	0m1.751s
00:17:39.463  sys	0m0.318s
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.463   13:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:17:39.463  ************************************
00:17:39.463  END TEST dd_sparse_bdev_to_file
00:17:39.463  ************************************
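[Editor's note] The stat comparisons above are the sparseness check itself: %s is the apparent size and %b the number of allocated 512-byte blocks. Both files report 37748736 bytes (36 MiB) apparent but only 24576 blocks allocated (24576 x 512 B = 12582912 B = 12 MiB), so two thirds of each file is holes, which matches the "Copying: 12/36 [MB]" progress lines. The same check, re-run by hand with the file names from the log:

# Holes survived the round trip iff both apparent size (%s) and
# allocated 512-byte block count (%b) match between source and copy.
[ "$(stat --printf=%s file_zero2)" -eq "$(stat --printf=%s file_zero3)" ] &&
[ "$(stat --printf=%b file_zero2)" -eq "$(stat --printf=%b file_zero3)" ] &&
echo "sparse copy verified: same size, same allocated blocks"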
00:17:39.463   13:54:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup
00:17:39.464   13:54:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk
00:17:39.464   13:54:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1
00:17:39.464   13:54:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2
00:17:39.464   13:54:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3
00:17:39.464  
00:17:39.464  real	0m6.964s
00:17:39.464  user	0m5.390s
00:17:39.464  sys	0m1.247s
00:17:39.464   13:54:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.464   13:54:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:17:39.464  ************************************
00:17:39.464  END TEST spdk_dd_sparse
00:17:39.464  ************************************
00:17:39.464   13:54:22 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:17:39.464   13:54:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:39.464   13:54:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:39.464   13:54:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:17:39.464  ************************************
00:17:39.464  START TEST spdk_dd_negative
00:17:39.464  ************************************
00:17:39.464   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:17:39.464  * Looking for test storage...
00:17:39.464  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:17:39.464     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:39.464      13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version
00:17:39.464      13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-:
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-:
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<'
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1
00:17:39.724     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2
00:17:39.724      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:39.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.725  		--rc genhtml_branch_coverage=1
00:17:39.725  		--rc genhtml_function_coverage=1
00:17:39.725  		--rc genhtml_legend=1
00:17:39.725  		--rc geninfo_all_blocks=1
00:17:39.725  		--rc geninfo_unexecuted_blocks=1
00:17:39.725  		
00:17:39.725  		'
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:39.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.725  		--rc genhtml_branch_coverage=1
00:17:39.725  		--rc genhtml_function_coverage=1
00:17:39.725  		--rc genhtml_legend=1
00:17:39.725  		--rc geninfo_all_blocks=1
00:17:39.725  		--rc geninfo_unexecuted_blocks=1
00:17:39.725  		
00:17:39.725  		'
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:39.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.725  		--rc genhtml_branch_coverage=1
00:17:39.725  		--rc genhtml_function_coverage=1
00:17:39.725  		--rc genhtml_legend=1
00:17:39.725  		--rc geninfo_all_blocks=1
00:17:39.725  		--rc geninfo_unexecuted_blocks=1
00:17:39.725  		
00:17:39.725  		'
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:39.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.725  		--rc genhtml_branch_coverage=1
00:17:39.725  		--rc genhtml_function_coverage=1
00:17:39.725  		--rc genhtml_legend=1
00:17:39.725  		--rc geninfo_all_blocks=1
00:17:39.725  		--rc geninfo_unexecuted_blocks=1
00:17:39.725  		
00:17:39.725  		'
00:17:39.725    13:54:22 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:39.725     13:54:22 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # export PATH
00:17:39.725      13:54:22 spdk_dd.spdk_dd_negative -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:39.725  ************************************
00:17:39.725  START TEST dd_invalid_arguments
00:17:39.725  ************************************
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.725    13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.725    13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:39.725   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:17:39.725  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:17:39.725  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:17:39.725  
00:17:39.725  CPU options:
00:17:39.725   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:17:39.725                                   (like [0,1,10])
00:17:39.725       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:17:39.725                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:17:39.725                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:17:39.726                             Within the group, '-' is used for range separator,
00:17:39.726                             ',' is used for single number separator.
00:17:39.726                             '( )' can be omitted for single element group,
00:17:39.726                             '@' can be omitted if cpus and lcores have the same value
00:17:39.726       --disable-cpumask-locks    Disable CPU core lock files.
00:17:39.726       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:17:39.726                             pollers in the app support interrupt mode)
00:17:39.726   -p, --main-core <id>      main (primary) core for DPDK
00:17:39.726  
00:17:39.726  Configuration options:
00:17:39.726   -c, --config, --json  <config>     JSON config file
00:17:39.726   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:17:39.726       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:17:39.726       --wait-for-rpc        wait for RPCs to initialize subsystems
00:17:39.726       --rpcs-allowed        comma-separated list of permitted RPCs
00:17:39.726       --json-ignore-init-errors    don't exit on invalid config entry
00:17:39.726  
00:17:39.726  Memory options:
00:17:39.726       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:17:39.726       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:17:39.726       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:17:39.726   -R, --huge-unlink         unlink huge files after initialization
00:17:39.726   -n, --mem-channels <num>  number of memory channels used for DPDK
00:17:39.726   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:17:39.726       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:17:39.726       --no-huge             run without using hugepages
00:17:39.726       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:17:39.726   -i, --shm-id <id>         shared memory ID (optional)
00:17:39.726   -g, --single-file-segments   force creating just one hugetlbfs file
00:17:39.726  
00:17:39.726  PCI options:
00:17:39.726   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:17:39.726   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:17:39.726   -u, --no-pci              disable PCI access
00:17:39.726       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:17:39.726  
00:17:39.726  Log options:
00:17:39.726   -L, --logflag <flag>      enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 
00:17:39.726                             app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 
00:17:39.726                             bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 
00:17:39.726                             blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 
00:17:39.726                             blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 
00:17:39.726                             iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 
00:17:39.726                             nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 
00:17:39.726                             sock_posix, spdk_aio_mgr_io, thread, trace, vbdev_delay, vbdev_gpt, 
00:17:39.726                             vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 
00:17:39.726                             vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, 
00:17:39.726                             virtio_user, virtio_vfio_user, vmd)
00:17:39.726       --silence-noticelog   disable notice level logging to stderr
00:17:39.726  
00:17:39.726  Trace options:
00:17:39.726       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:17:39.726                                   setting 0 to disable trace (default 32768)
00:17:39.726                                   Tracepoints vary in size and can use more than one trace entry.
00:17:39.726   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:17:39.726                group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 
00:17:39.726                             blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 
00:17:39.726                             bdev_raid, scheduler, all).
00:17:39.726                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:17:39.726                             a tracepoint group. First tpoint inside a group can be enabled by
00:17:39.726                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:17:39.726                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:17:39.726                             in /include/spdk_internal/trace_defs.h
00:17:39.726  
00:17:39.726  Other options:
00:17:39.726   -h, --help                show this usage
00:17:39.726   -v, --version             print SPDK version
00:17:39.726   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:17:39.726       --env-context         Opaque context for use of the env implementation
00:17:39.726  
00:17:39.726  Application specific:
00:17:39.726  [--------- DD Options ---------]
00:17:39.726   --if Input file. Must specify either --if or --ib.
00:17:39.726   --ib Input bdev. Must specify either --if or --ib.
00:17:39.726   --of Output file. Must specify either --of or --ob.
00:17:39.726   --ob Output bdev. Must specify either --of or --ob.
00:17:39.726   --iflag Input file flags.
00:17:39.726   --oflag Output file flags.
00:17:39.726   --bs I/O unit size (default: 4096)
00:17:39.726   --qd Queue depth (default: 2)
00:17:39.726   --count I/O unit count. The number of I/O units to copy. (default: all)
00:17:39.726   --skip Skip this many I/O units at start of input. (default: 0)
00:17:39.726   --seek Skip this many I/O units at start of output. (default: 0)
00:17:39.726   --aio Force usage of AIO. (by default io_uring is used if available)
00:17:39.726   --sparse Enable hole skipping in input target
00:17:39.726   Available iflag and oflag values:
00:17:39.726    append - append mode
00:17:39.726    direct - use direct I/O for data
00:17:39.726    directory - fail unless a directory
00:17:39.726    dsync - use synchronized I/O for data
00:17:39.726    noatime - do not update access time
00:17:39.726    noctty - do not assign controlling terminal from file
00:17:39.726    nofollow - do not follow symlinks
00:17:39.726    nonblock - use non-blocking I/O
00:17:39.726    sync - use synchronized I/O for data and metadata
00:17:39.726  [2024-12-11 13:54:22.417524] spdk_dd.c:1478:main: *ERROR*: Invalid arguments
00:17:39.726   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2
00:17:39.726   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:39.726   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:39.726   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:39.726  
00:17:39.726  real	0m0.157s
00:17:39.726  user	0m0.082s
00:17:39.726  sys	0m0.076s
00:17:39.726   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.726   13:54:22 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x
00:17:39.726  ************************************
00:17:39.726  END TEST dd_invalid_arguments
00:17:39.726  ************************************
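[Editor's note] Every negative test in this suite has the same shape: run_test wraps a function whose body calls NOT <command>, and NOT succeeds only when the wrapped command fails. A simplified sketch of that inversion, assuming the hypothetical minimal helper below rather than the fuller version in autotest_common.sh (which, as the es=2 / (( es > 128 )) lines in the trace show, also distinguishes signal deaths from ordinary non-zero exits):

# Hypothetical minimal NOT(): invert the wrapped command's exit status.
NOT() {
    if "$@"; then
        return 1        # unexpected success => the negative test fails
    fi
    return 0            # expected failure (e.g. exit status 1 from spdk_dd)
}
# Usage, mirroring the trace above: an unknown --ii= option must be rejected.
NOT ./build/bin/spdk_dd --ii= --ob=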
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:39.987  ************************************
00:17:39.987  START TEST dd_double_input
00:17:39.987  ************************************
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.987    13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.987    13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:17:39.987  [2024-12-11 13:54:22.631917] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both.
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:39.987  
00:17:39.987  real	0m0.155s
00:17:39.987  user	0m0.084s
00:17:39.987  sys	0m0.072s
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x
00:17:39.987  ************************************
00:17:39.987  END TEST dd_double_input
00:17:39.987  ************************************
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:39.987  ************************************
00:17:39.987  START TEST dd_double_output
00:17:39.987  ************************************
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.987    13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.987    13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:39.987   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:17:40.247  [2024-12-11 13:54:22.828372] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both.
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:40.247  
00:17:40.247  real	0m0.125s
00:17:40.247  user	0m0.057s
00:17:40.247  sys	0m0.069s
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x
00:17:40.247  ************************************
00:17:40.247  END TEST dd_double_output
00:17:40.247  ************************************
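[Editor's note] dd_double_input and dd_double_output exercise the two mutually exclusive pairs: spdk_dd accepts exactly one input (--if or --ib) and one output (--of or --ob), and even an empty-valued second flag trips the check, at spdk_dd.c:1485 and :1491 in the traces above. Reproducing either rejection by hand, with relative paths standing in for the test's dump files:

# Both an input file and an input bdev => rejected before any I/O happens.
./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=
# stderr: You may specify either --if or --ib, but not both.
# Both an output file and an output bdev => the symmetric rejection.
./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --ob=
# stderr: You may specify either --of or --ob, but not both.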
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:40.247  ************************************
00:17:40.247  START TEST dd_no_input
00:17:40.247  ************************************
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.247    13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.247    13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:40.247   13:54:22 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:17:40.247  [2024-12-11 13:54:23.007626] spdk_dd.c:1497:main: *ERROR*: You must specify either --if or --ib
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:40.507  
00:17:40.507  real	0m0.121s
00:17:40.507  user	0m0.059s
00:17:40.507  sys	0m0.063s
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x
00:17:40.507  ************************************
00:17:40.507  END TEST dd_no_input
00:17:40.507  ************************************
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:40.507  ************************************
00:17:40.507  START TEST dd_no_output
00:17:40.507  ************************************
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.507    13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.507    13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:17:40.507  [2024-12-11 13:54:23.209712] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:40.507  
00:17:40.507  real	0m0.150s
00:17:40.507  user	0m0.071s
00:17:40.507  sys	0m0.080s
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:40.507   13:54:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x
00:17:40.507  ************************************
00:17:40.507  END TEST dd_no_output
00:17:40.507  ************************************
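[Editor's note] dd_no_input and dd_no_output are the presence half of the same argument matrix: omitting the input entirely fails at spdk_dd.c:1497, omitting the output at :1503, exactly as the two error lines above report. By hand:

# No input at all => rejected at spdk_dd.c:1497.
./build/bin/spdk_dd --ob=
# stderr: You must specify either --if or --ib
# Input but no output => rejected at spdk_dd.c:1503.
./build/bin/spdk_dd --if=test/dd/dd.dump0
# stderr: You must specify either --of or --ob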
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:40.767  ************************************
00:17:40.767  START TEST dd_wrong_blocksize
00:17:40.767  ************************************
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.767    13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.767    13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:17:40.767  [2024-12-11 13:54:23.421075] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:40.767  
00:17:40.767  real	0m0.157s
00:17:40.767  user	0m0.080s
00:17:40.767  sys	0m0.078s
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x
00:17:40.767  ************************************
00:17:40.767  END TEST dd_wrong_blocksize
00:17:40.767  ************************************
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:40.767  ************************************
00:17:40.767  START TEST dd_smaller_blocksize
00:17:40.767  ************************************
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:40.767   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:40.767    13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:41.027   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.027    13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:41.027   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.027   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:41.027   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:41.027   13:54:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:17:41.027  [2024-12-11 13:54:23.634699] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:41.027  [2024-12-11 13:54:23.634890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79215 ]
00:17:41.287  [2024-12-11 13:54:23.835207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:41.287  [2024-12-11 13:54:24.017032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:42.226  EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list
00:17:42.485  EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list
00:17:42.485  [2024-12-11 13:54:25.061903] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value
00:17:42.485  [2024-12-11 13:54:25.061990] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:43.433  [2024-12-11 13:54:25.935840] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:43.433   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244
00:17:43.433   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:43.433   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116
00:17:43.433   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in
00:17:43.433   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1
00:17:43.433   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:43.433  
00:17:43.433  real	0m2.655s
00:17:43.434  user	0m1.763s
00:17:43.434  sys	0m0.791s
00:17:43.434   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:43.434   13:54:26 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x
00:17:43.434  ************************************
00:17:43.434  END TEST dd_smaller_blocksize
00:17:43.434  ************************************
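[Editor's note] dd_smaller_blocksize is the one negative test here that actually starts the app: --bs=99999999999999 passes argument parsing, so spdk_dd boots DPDK, fails to allocate its copy buffers (the two EAL "couldn't find suitable memseg_list" errors, since the total is roughly queue depth times block size), and exits through dd_run's "try smaller block size" path with status 244, which the trace's es handling remaps (244 > 128, so es becomes 116, which the case statement folds to 1). By hand, with the same dump files:

# An absurd I/O unit size parses fine, but the buffer allocation fails:
./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999
# stderr: Cannot allocate memory - try smaller block size value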
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:43.693  ************************************
00:17:43.693  START TEST dd_invalid_count
00:17:43.693  ************************************
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.693    13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.693    13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:17:43.693  [2024-12-11 13:54:26.338852] spdk_dd.c:1515:main: *ERROR*: Invalid --count value
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:43.693   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:43.693  
00:17:43.694  real	0m0.157s
00:17:43.694  user	0m0.097s
00:17:43.694  sys	0m0.061s
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x
00:17:43.694  ************************************
00:17:43.694  END TEST dd_invalid_count
00:17:43.694  ************************************
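[Editor's note] The remaining parameter checks are rejected before any I/O, like the earlier ones; a negative --count, for instance, dies at spdk_dd.c:1515 just as the trace shows:

./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --count=-9
# stderr: Invalid --count value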
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:43.694  ************************************
00:17:43.694  START TEST dd_invalid_oflag
00:17:43.694  ************************************
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:17:43.694   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.956    13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.956    13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:17:43.956  [2024-12-11 13:54:26.559613] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:43.956  
00:17:43.956  real	0m0.147s
00:17:43.956  user	0m0.082s
00:17:43.956  sys	0m0.065s
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x
00:17:43.956  ************************************
00:17:43.956  END TEST dd_invalid_oflag
00:17:43.956  ************************************
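The es=22 recorded here is errno EINVAL surfacing as spdk_dd's exit status: --oflag was passed without the --of it applies to, so argument parsing fails at spdk_dd.c:1521 before any copy starts. Standalone, assuming the NOT sketch above:

  # --oflag is only meaningful together with --of.
  NOT ./build/bin/spdk_dd --ib= --ob= --oflag=0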
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:43.956  ************************************
00:17:43.956  START TEST dd_invalid_iflag
00:17:43.956  ************************************
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.956    13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.956    13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:43.956   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:17:44.224  [2024-12-11 13:54:26.770147] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:44.224  
00:17:44.224  real	0m0.148s
00:17:44.224  user	0m0.071s
00:17:44.224  sys	0m0.078s
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x
00:17:44.224  ************************************
00:17:44.224  END TEST dd_invalid_iflag
00:17:44.224  ************************************
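The iflag case mirrors the oflag one (spdk_dd.c:1527). The type probing traced in each of these tests is valid_exec_arg deciding whether the argument is a shell builtin, a function, or an on-disk binary; a hedged sketch:

  # Sketch of valid_exec_arg: resolve the argument with bash's type
  # builtin; plain paths must resolve to an executable file.
  valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
      builtin|function) ;;
      *) arg=$(type -P "$arg") && [[ -x $arg ]] ;;
    esac
  }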
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:44.224  ************************************
00:17:44.224  START TEST dd_unknown_flag
00:17:44.224  ************************************
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.224    13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.224    13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:44.224   13:54:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:17:44.224  [2024-12-11 13:54:26.980220] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:44.224  [2024-12-11 13:54:26.980411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79333 ]
00:17:44.483  [2024-12-11 13:54:27.183586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:44.742  [2024-12-11 13:54:27.357935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:45.000  
00:17:45.000  Copying: 0/0 [B] (average 0 Bps)
00:17:45.000  [2024-12-11 13:54:27.730553] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1
00:17:45.000  [2024-12-11 13:54:27.730623] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:45.000  [2024-12-11 13:54:27.730869] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice
00:17:45.935  [2024-12-11 13:54:28.640330] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:46.194  
00:17:46.194  
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:46.194  
00:17:46.194  real	0m2.061s
00:17:46.194  user	0m1.671s
00:17:46.194  sys	0m0.275s
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:46.194   13:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x
00:17:46.194  ************************************
00:17:46.194  END TEST dd_unknown_flag
00:17:46.194  ************************************
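The es bookkeeping above is the harness normalizing exit statuses: values over 128 (the signal range) are folded back by 128 (234 becomes 106) and then collapsed to a plain failure so NOT sees 1. A sketch of the arithmetic, with the case list simplified to an assumption:

  es=234
  (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, as traced above
  case "$es" in
    0) : ;;        # success stays success
    *) es=1 ;;     # assumption: the real whitelist is longer
  esac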
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:46.458  ************************************
00:17:46.458  START TEST dd_invalid_json
00:17:46.458  ************************************
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:17:46.458    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # :
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:46.458    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:46.458    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:46.458   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:17:46.458  [2024-12-11 13:54:29.098356] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:46.458  [2024-12-11 13:54:29.098552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79373 ]
00:17:46.723  [2024-12-11 13:54:29.299125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:46.723  [2024-12-11 13:54:29.473607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:46.723  [2024-12-11 13:54:29.473736] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty
00:17:46.723  [2024-12-11 13:54:29.473762] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:17:46.723  [2024-12-11 13:54:29.473785] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:46.723  [2024-12-11 13:54:29.473866] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:46.981   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234
00:17:46.981   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:46.982   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106
00:17:46.982   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in
00:17:46.982   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1
00:17:46.982   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:46.982  
00:17:46.982  real	0m0.758s
00:17:46.982  user	0m0.496s
00:17:46.982  sys	0m0.162s
00:17:46.982   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:47.240   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x
00:17:47.240  ************************************
00:17:47.240  END TEST dd_invalid_json
00:17:47.240  ************************************
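This test feeds the configuration over a file descriptor: --json /dev/fd/62 points at a process substitution whose producer is the ':' no-op traced at negative_dd.sh@94, so spdk_dd reads zero bytes and fails with 'JSON data cannot be empty'. The standalone shape is:

  # Empty JSON via process substitution; <(:) expands to a /dev/fd path.
  NOT ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --json <(:)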
00:17:47.240   13:54:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek
00:17:47.240   13:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:47.241  ************************************
00:17:47.241  START TEST dd_invalid_seek
00:17:47.241  ************************************
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:47.241    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf
00:17:47.241    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable
00:17:47.241    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:47.241    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:47.241    13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:47.241   13:54:29 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:17:47.241  {
00:17:47.241    "subsystems": [
00:17:47.241      {
00:17:47.241        "subsystem": "bdev",
00:17:47.241        "config": [
00:17:47.241          {
00:17:47.241            "params": {
00:17:47.241              "block_size": 512,
00:17:47.241              "num_blocks": 512,
00:17:47.241              "name": "malloc0"
00:17:47.241            },
00:17:47.241            "method": "bdev_malloc_create"
00:17:47.241          },
00:17:47.241          {
00:17:47.241            "params": {
00:17:47.241              "block_size": 512,
00:17:47.241              "num_blocks": 512,
00:17:47.241              "name": "malloc1"
00:17:47.241            },
00:17:47.241            "method": "bdev_malloc_create"
00:17:47.241          },
00:17:47.241          {
00:17:47.241            "method": "bdev_wait_for_examine"
00:17:47.241          }
00:17:47.241        ]
00:17:47.241      }
00:17:47.241    ]
00:17:47.241  }
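The JSON above is what gen_conf emits for the seek/skip tests: a bdev subsystem with two 512-block, 512-byte-block malloc disks plus bdev_wait_for_examine so the copy cannot start before the bdevs exist. A hedged helper producing the same document (gen_malloc_conf is a name invented here):

  gen_malloc_conf() {
    echo '{"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create", "params": {"name": "malloc0", "num_blocks": 512, "block_size": 512}},
      {"method": "bdev_malloc_create", "params": {"name": "malloc1", "num_blocks": 512, "block_size": 512}},
      {"method": "bdev_wait_for_examine"}]}]}'
  }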
00:17:47.241  [2024-12-11 13:54:29.911513] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:47.241  [2024-12-11 13:54:29.911722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79409 ]
00:17:47.499  [2024-12-11 13:54:30.109480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:47.499  [2024-12-11 13:54:30.245693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:48.067  [2024-12-11 13:54:30.667958] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output
00:17:48.067  [2024-12-11 13:54:30.668050] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:49.004  [2024-12-11 13:54:31.643992] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:49.264  
00:17:49.264  real	0m2.110s
00:17:49.264  user	0m1.731s
00:17:49.264  sys	0m0.311s
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x
00:17:49.264  ************************************
00:17:49.264  END TEST dd_invalid_seek
00:17:49.264  ************************************
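--seek is validated against the output bdev's size in blocks: malloc1 holds 512 blocks, so seeking 513 fails at spdk_dd.c:1143 before any data moves. Reusing the sketches above:

  NOT ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 \
      --json <(gen_malloc_conf)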
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:49.264  ************************************
00:17:49.264  START TEST dd_invalid_skip
00:17:49.264  ************************************
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:49.264    13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf
00:17:49.264    13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable
00:17:49.264    13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:49.264    13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:49.264    13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:49.264   13:54:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:17:49.264  {
00:17:49.264    "subsystems": [
00:17:49.264      {
00:17:49.264        "subsystem": "bdev",
00:17:49.264        "config": [
00:17:49.264          {
00:17:49.264            "params": {
00:17:49.264              "block_size": 512,
00:17:49.264              "num_blocks": 512,
00:17:49.264              "name": "malloc0"
00:17:49.264            },
00:17:49.264            "method": "bdev_malloc_create"
00:17:49.264          },
00:17:49.264          {
00:17:49.264            "params": {
00:17:49.264              "block_size": 512,
00:17:49.264              "num_blocks": 512,
00:17:49.264              "name": "malloc1"
00:17:49.264            },
00:17:49.264            "method": "bdev_malloc_create"
00:17:49.264          },
00:17:49.264          {
00:17:49.264            "method": "bdev_wait_for_examine"
00:17:49.264          }
00:17:49.264        ]
00:17:49.264      }
00:17:49.264    ]
00:17:49.264  }
00:17:49.541  [2024-12-11 13:54:32.095409] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:49.541  [2024-12-11 13:54:32.095701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79460 ]
00:17:49.541  [2024-12-11 13:54:32.316795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.835  [2024-12-11 13:54:32.487190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:50.402  [2024-12-11 13:54:32.904129] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input
00:17:50.402  [2024-12-11 13:54:32.904230] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:51.338  [2024-12-11 13:54:33.857744] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.595  
00:17:51.595  real	0m2.153s
00:17:51.595  user	0m1.786s
00:17:51.595  sys	0m0.293s
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x
00:17:51.595  ************************************
00:17:51.595  END TEST dd_invalid_skip
00:17:51.595  ************************************
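--skip is the input-side twin of --seek, bounded by malloc0's 512 blocks (spdk_dd.c:1100):

  NOT ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --bs=512 \
      --json <(gen_malloc_conf)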
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:51.595  ************************************
00:17:51.595  START TEST dd_invalid_input_count
00:17:51.595  ************************************
00:17:51.595   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0
00:17:51.596    13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:51.596    13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable
00:17:51.596    13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.596    13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.596    13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:51.596   13:54:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:17:51.596  {
00:17:51.596    "subsystems": [
00:17:51.596      {
00:17:51.596        "subsystem": "bdev",
00:17:51.596        "config": [
00:17:51.596          {
00:17:51.596            "params": {
00:17:51.596              "block_size": 512,
00:17:51.596              "num_blocks": 512,
00:17:51.596              "name": "malloc0"
00:17:51.596            },
00:17:51.596            "method": "bdev_malloc_create"
00:17:51.596          },
00:17:51.596          {
00:17:51.596            "params": {
00:17:51.596              "block_size": 512,
00:17:51.596              "num_blocks": 512,
00:17:51.596              "name": "malloc1"
00:17:51.596            },
00:17:51.596            "method": "bdev_malloc_create"
00:17:51.596          },
00:17:51.596          {
00:17:51.596            "method": "bdev_wait_for_examine"
00:17:51.596          }
00:17:51.596        ]
00:17:51.596      }
00:17:51.596    ]
00:17:51.596  }
00:17:51.596  [2024-12-11 13:54:34.277764] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:51.596  [2024-12-11 13:54:34.278027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79511 ]
00:17:51.854  [2024-12-11 13:54:34.488286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:52.112  [2024-12-11 13:54:34.665284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:52.369  [2024-12-11 13:54:35.157598] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input
00:17:52.369  [2024-12-11 13:54:35.157702] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:53.742  [2024-12-11 13:54:36.114825] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:53.742   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228
00:17:53.742   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:53.742   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:53.743  
00:17:53.743  real	0m2.223s
00:17:53.743  user	0m1.825s
00:17:53.743  sys	0m0.321s
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x
00:17:53.743  ************************************
00:17:53.743  END TEST dd_invalid_input_count
00:17:53.743  ************************************
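With a bdev as the source, --count is bounded by the input's block count (spdk_dd.c:1108), unlike dd_invalid_count earlier where the value itself was malformed:

  NOT ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --bs=512 \
      --json <(gen_malloc_conf)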
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:53.743  ************************************
00:17:53.743  START TEST dd_invalid_output_count
00:17:53.743  ************************************
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:53.743    13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf
00:17:53.743    13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable
00:17:53.743    13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.743    13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.743    13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:53.743   13:54:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:17:53.743  {
00:17:53.743    "subsystems": [
00:17:53.743      {
00:17:53.743        "subsystem": "bdev",
00:17:53.743        "config": [
00:17:53.743          {
00:17:53.743            "params": {
00:17:53.743              "block_size": 512,
00:17:53.743              "num_blocks": 512,
00:17:53.743              "name": "malloc0"
00:17:53.743            },
00:17:53.743            "method": "bdev_malloc_create"
00:17:53.743          },
00:17:53.743          {
00:17:53.743            "method": "bdev_wait_for_examine"
00:17:53.743          }
00:17:53.743        ]
00:17:53.743      }
00:17:53.743    ]
00:17:53.743  }
00:17:54.002  [2024-12-11 13:54:36.560578] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:54.002  [2024-12-11 13:54:36.560792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79562 ]
00:17:54.002  [2024-12-11 13:54:36.757934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:54.261  [2024-12-11 13:54:36.904151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:54.520  [2024-12-11 13:54:37.303554] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output
00:17:54.520  [2024-12-11 13:54:37.303678] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:55.898  [2024-12-11 13:54:38.259537] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:55.898  
00:17:55.898  real	0m2.074s
00:17:55.898  user	0m1.698s
00:17:55.898  sys	0m0.302s
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x
00:17:55.898  ************************************
00:17:55.898  END TEST dd_invalid_output_count
00:17:55.898  ************************************
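The output-side variant reads a regular file and writes one malloc bdev, so only malloc0 appears in the generated config and the 513-block --count trips the output check at spdk_dd.c:1150:

  # Assumes a single-bdev variant of the gen_malloc_conf sketch above.
  NOT ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=malloc0 --count=513 --bs=512 \
      --json <(gen_malloc_conf)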
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:55.898  ************************************
00:17:55.898  START TEST dd_bs_not_multiple
00:17:55.898  ************************************
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:17:55.898    13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:55.898    13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable
00:17:55.898    13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:55.898    13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:55.898    13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:17:55.898   13:54:38 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:17:55.898  {
00:17:55.898    "subsystems": [
00:17:55.898      {
00:17:55.898        "subsystem": "bdev",
00:17:55.898        "config": [
00:17:55.898          {
00:17:55.898            "params": {
00:17:55.898              "block_size": 512,
00:17:55.898              "num_blocks": 512,
00:17:55.898              "name": "malloc0"
00:17:55.898            },
00:17:55.898            "method": "bdev_malloc_create"
00:17:55.898          },
00:17:55.898          {
00:17:55.898            "params": {
00:17:55.898              "block_size": 512,
00:17:55.898              "num_blocks": 512,
00:17:55.898              "name": "malloc1"
00:17:55.898            },
00:17:55.898            "method": "bdev_malloc_create"
00:17:55.898          },
00:17:55.898          {
00:17:55.898            "method": "bdev_wait_for_examine"
00:17:55.898          }
00:17:55.898        ]
00:17:55.898      }
00:17:55.898    ]
00:17:55.898  }
00:17:56.158  [2024-12-11 13:54:38.701094] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:56.158  [2024-12-11 13:54:38.701287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79610 ]
00:17:56.158  [2024-12-11 13:54:38.901248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:56.416  [2024-12-11 13:54:39.036563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:56.675  [2024-12-11 13:54:39.436453] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512)
00:17:56.675  [2024-12-11 13:54:39.436800] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:58.051  [2024-12-11 13:54:40.417162] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:58.051  
00:17:58.051  real	0m2.090s
00:17:58.051  user	0m1.691s
00:17:58.051  sys	0m0.327s
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x
00:17:58.051  ************************************
00:17:58.051  END TEST dd_bs_not_multiple
00:17:58.051  ************************************
00:17:58.051  ************************************
00:17:58.051  END TEST spdk_dd_negative
00:17:58.051  ************************************
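The final negative case checks block-size alignment: 513 is not a multiple of the bdevs' 512-byte native block size, so spdk_dd refuses at spdk_dd.c:1166. Standalone:

  NOT ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json <(gen_malloc_conf)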
00:17:58.051  
00:17:58.051  real	0m18.647s
00:17:58.051  user	0m13.769s
00:17:58.051  sys	0m4.233s
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:58.051   13:54:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:17:58.051  ************************************
00:17:58.051  END TEST spdk_dd
00:17:58.051  ************************************
00:17:58.051  
00:17:58.051  real	2m59.892s
00:17:58.051  user	2m19.991s
00:17:58.051  sys	0m30.323s
00:17:58.051   13:54:40 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:58.051   13:54:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x
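Every START/END banner pair and real/user/sys triplet in this section comes from run_test in autotest_common.sh, which brackets the named function in banners and times it with bash's time keyword; a hedged, simplified form:

  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"       # produces the real/user/sys triplets seen above
    echo "END TEST $name"
  }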
00:17:58.311   13:54:40  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:17:58.311   13:54:40  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:17:58.311   13:54:40  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:58.311   13:54:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:58.311   13:54:40  -- common/autotest_common.sh@10 -- # set +x
00:17:58.311  ************************************
00:17:58.311  START TEST blockdev_nvme
00:17:58.311  ************************************
00:17:58.311   13:54:40 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:17:58.311  * Looking for test storage...
00:17:58.311  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:17:58.311    13:54:40 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:58.311     13:54:40 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:17:58.311     13:54:40 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:58.311    13:54:41 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:58.311     13:54:41 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:58.311    13:54:41 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:17:58.311    13:54:41 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:58.311    13:54:41 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:58.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.311  		--rc genhtml_branch_coverage=1
00:17:58.311  		--rc genhtml_function_coverage=1
00:17:58.311  		--rc genhtml_legend=1
00:17:58.311  		--rc geninfo_all_blocks=1
00:17:58.311  		--rc geninfo_unexecuted_blocks=1
00:17:58.311  		
00:17:58.311  		'
00:17:58.311    13:54:41 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:58.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.311  		--rc genhtml_branch_coverage=1
00:17:58.311  		--rc genhtml_function_coverage=1
00:17:58.311  		--rc genhtml_legend=1
00:17:58.311  		--rc geninfo_all_blocks=1
00:17:58.311  		--rc geninfo_unexecuted_blocks=1
00:17:58.311  		
00:17:58.311  		'
00:17:58.311    13:54:41 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:58.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.311  		--rc genhtml_branch_coverage=1
00:17:58.311  		--rc genhtml_function_coverage=1
00:17:58.311  		--rc genhtml_legend=1
00:17:58.311  		--rc geninfo_all_blocks=1
00:17:58.311  		--rc geninfo_unexecuted_blocks=1
00:17:58.311  		
00:17:58.311  		'
00:17:58.311    13:54:41 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:58.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.311  		--rc genhtml_branch_coverage=1
00:17:58.311  		--rc genhtml_function_coverage=1
00:17:58.311  		--rc genhtml_legend=1
00:17:58.311  		--rc geninfo_all_blocks=1
00:17:58.311  		--rc geninfo_unexecuted_blocks=1
00:17:58.311  		
00:17:58.311  		'
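The cmp_versions walk traced before this export is a field-by-field numeric comparison: both version strings are split on '.', '-' and ':' and compared position by position, so 1.15 sorts before 2 and the old-lcov option set is chosen. A hedged standalone sketch (numeric fields assumed):

  lt() {  # returns 0 when version $1 sorts before $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1
  }
  lt 1.15 2 && echo "lcov predates 2.x"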
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:17:58.311    13:54:41 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:17:58.311    13:54:41 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek=
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]]
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]]
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79728
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:17:58.311   13:54:41 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 79728
00:17:58.312   13:54:41 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 79728 ']'
00:17:58.312   13:54:41 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:58.312   13:54:41 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:58.312   13:54:41 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:58.312  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:58.312   13:54:41 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:58.312   13:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:17:58.571  [2024-12-11 13:54:41.179754] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:58.571  [2024-12-11 13:54:41.180741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79728 ]
00:17:58.829  [2024-12-11 13:54:41.378251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:58.829  [2024-12-11 13:54:41.525080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
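start_spdk_tgt launches the target in the background and waitforlisten polls (up to max_retries) until the RPC socket answers; the return 0 above means the socket came up. Condensed, the pattern is roughly this (a sketch, not the helper's exact code; paths relative to the SPDK repo root):

    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # retry until the UNIX-domain RPC socket accepts a harmless request
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done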
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
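setup_nvme_conf feeds the gen_nvme.sh JSON to load_subsystem_config; the same controller could be attached with a single direct RPC using the address reported above:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0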
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:00.204   13:54:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:18:00.204    13:54:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.204   13:54:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:18:00.204    13:54:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name
00:18:00.205    13:54:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "46b48452-ed5e-450d-837c-126bf7fc38c0"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "46b48452-ed5e-450d-837c-126bf7fc38c0",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:18:00.205   13:54:42 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:18:00.205   13:54:42 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1
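The two mapfile steps reduce the bdev_get_bdevs dump to the names of unclaimed bdevs, which is how Nvme0n1 gets picked as hello_world_bdev; as a single pipeline:

    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'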
00:18:00.205   13:54:42 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:18:00.205   13:54:42 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 79728
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 79728 ']'
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 79728
00:18:00.205    13:54:42 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:00.205    13:54:42 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79728
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79728'
00:18:00.205  killing process with pid 79728
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 79728
00:18:00.205   13:54:42 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 79728
00:18:03.487   13:54:45 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:03.487   13:54:45 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:18:03.487   13:54:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:18:03.487   13:54:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:03.487   13:54:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:03.487  ************************************
00:18:03.487  START TEST bdev_hello_world
00:18:03.487  ************************************
00:18:03.487   13:54:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:18:03.487  [2024-12-11 13:54:45.696844] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:03.487  [2024-12-11 13:54:45.697708] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79810 ]
00:18:03.487  [2024-12-11 13:54:45.896984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:03.488  [2024-12-11 13:54:46.040048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:04.054  [2024-12-11 13:54:46.605812] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:18:04.054  [2024-12-11 13:54:46.605876] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:18:04.054  [2024-12-11 13:54:46.605925] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:18:04.054  [2024-12-11 13:54:46.609470] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:18:04.054  [2024-12-11 13:54:46.610214] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:18:04.054  [2024-12-11 13:54:46.610254] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:18:04.054  [2024-12-11 13:54:46.610517] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:18:04.054  
00:18:04.054  [2024-12-11 13:54:46.610551] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:18:05.463  
00:18:05.463  real	0m2.329s
00:18:05.463  user	0m1.933s
00:18:05.463  sys	0m0.295s
00:18:05.463   13:54:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:05.463   13:54:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:18:05.463  ************************************
00:18:05.463  END TEST bdev_hello_world
00:18:05.463  ************************************
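The hello_world test boils down to one invocation of the example binary against the generated config, with flags exactly as traced above (paths relative to the SPDK repo root):

    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1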
00:18:05.463   13:54:47 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:18:05.463   13:54:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:05.463   13:54:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:05.463   13:54:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:05.463  ************************************
00:18:05.463  START TEST bdev_bounds
00:18:05.463  ************************************
00:18:05.463   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:18:05.463   13:54:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=79852
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 79852'
00:18:05.464  Process bdevio pid: 79852
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 79852
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 79852 ']'
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:05.464  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:05.464   13:54:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:05.464  [2024-12-11 13:54:48.071629] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:05.464  [2024-12-11 13:54:48.071795] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79852 ]
00:18:05.464  [2024-12-11 13:54:48.253489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:05.722  [2024-12-11 13:54:48.397798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:05.722  [2024-12-11 13:54:48.397948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:05.722  [2024-12-11 13:54:48.397977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:18:06.288   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:06.288   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:18:06.288   13:54:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:18:06.546  I/O targets:
00:18:06.546    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:18:06.546  
00:18:06.546  
00:18:06.546       CUnit - A unit testing framework for C - Version 2.1-3
00:18:06.546       http://cunit.sourceforge.net/
00:18:06.546  
00:18:06.546  
00:18:06.546  Suite: bdevio tests on: Nvme0n1
00:18:06.546    Test: blockdev write read block ...passed
00:18:06.546    Test: blockdev write zeroes read block ...passed
00:18:06.546    Test: blockdev write zeroes read no split ...passed
00:18:06.546    Test: blockdev write zeroes read split ...passed
00:18:06.546    Test: blockdev write zeroes read split partial ...passed
00:18:06.546    Test: blockdev reset ...[2024-12-11 13:54:49.275065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:18:06.546  [2024-12-11 13:54:49.279507] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:18:06.546  passed
00:18:06.546    Test: blockdev write read 8 blocks ...passed
00:18:06.546    Test: blockdev write read size > 128k ...passed
00:18:06.546    Test: blockdev write read invalid size ...passed
00:18:06.546    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:06.546    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:06.546    Test: blockdev write read max offset ...passed
00:18:06.546    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:06.546    Test: blockdev writev readv 8 blocks ...passed
00:18:06.546    Test: blockdev writev readv 30 x 1block ...passed
00:18:06.546    Test: blockdev writev readv block ...passed
00:18:06.546    Test: blockdev writev readv size > 128k ...passed
00:18:06.546    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:06.546    Test: blockdev comparev and writev ...[2024-12-11 13:54:49.291066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd00d000 len:0x1000
00:18:06.546  [2024-12-11 13:54:49.291141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:18:06.546  passed
00:18:06.546    Test: blockdev nvme passthru rw ...passed
00:18:06.546    Test: blockdev nvme passthru vendor specific ...[2024-12-11 13:54:49.292006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:18:06.546  passed
00:18:06.547    Test: blockdev nvme admin passthru ...[2024-12-11 13:54:49.292146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:18:06.547  passed
00:18:06.547    Test: blockdev copy ...passed
00:18:06.547  
00:18:06.547  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:06.547                suites      1      1    n/a      0        0
00:18:06.547                 tests     23     23     23      0        0
00:18:06.547               asserts    152    152    152      0      n/a
00:18:06.547  
00:18:06.547  Elapsed time =    0.279 seconds
00:18:06.547  0
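bdev_bounds drives the CUnit suite above by starting bdevio against the shared config and then triggering the tests over RPC, matching the commands traced earlier (condensed; the real script also waits for the RPC socket and uses killprocess for teardown):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"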
00:18:06.547   13:54:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 79852
00:18:06.547   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 79852 ']'
00:18:06.547   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 79852
00:18:06.547    13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:18:06.547   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:06.547    13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79852
00:18:06.805   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:06.805   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:06.805   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79852'
00:18:06.805  killing process with pid 79852
00:18:06.805   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 79852
00:18:06.805   13:54:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 79852
00:18:08.182   13:54:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:18:08.182  
00:18:08.182  real	0m2.580s
00:18:08.182  user	0m6.533s
00:18:08.182  sys	0m0.414s
00:18:08.182   13:54:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:08.182   13:54:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:08.182  ************************************
00:18:08.182  END TEST bdev_bounds
00:18:08.182  ************************************
00:18:08.182   13:54:50 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:18:08.182   13:54:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:08.182   13:54:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:08.182   13:54:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:08.182  ************************************
00:18:08.182  START TEST bdev_nbd
00:18:08.182  ************************************
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:18:08.182    13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1')
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0')
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1')
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=79917
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 79917 /var/tmp/spdk-nbd.sock
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 79917 ']'
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:18:08.182  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:08.182   13:54:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:08.182  [2024-12-11 13:54:50.749543] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:08.182  [2024-12-11 13:54:50.750107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:08.182  [2024-12-11 13:54:50.953871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:08.441  [2024-12-11 13:54:51.095670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:18:09.007   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:18:09.007    13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:18:09.266    13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:09.266  1+0 records in
00:18:09.266  1+0 records out
00:18:09.266  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600325 s, 6.8 MB/s
00:18:09.266    13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:09.266   13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
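waitfornbd retries until the kernel publishes the new device in /proc/partitions, then issues a one-block direct read to prove it answers I/O. The polling half, condensed (the retry delay here is illustrative):

    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done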
00:18:09.266    13:54:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:18:09.524    {
00:18:09.524      "nbd_device": "/dev/nbd0",
00:18:09.524      "bdev_name": "Nvme0n1"
00:18:09.524    }
00:18:09.524  ]'
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:18:09.524    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:18:09.524    {
00:18:09.524      "nbd_device": "/dev/nbd0",
00:18:09.524      "bdev_name": "Nvme0n1"
00:18:09.524    }
00:18:09.524  ]'
00:18:09.524    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
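nbd_get_disks returns a JSON array of device/bdev pairs; extracting just the device paths is the single jq filter used above:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'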
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:09.524   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:09.782    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:09.782   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:09.782    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:09.782    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:09.782     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:10.040    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:10.040     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:10.040     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:10.040    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:10.040     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:10.040     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:10.040     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:10.040    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:10.040    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:10.040   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:18:10.040   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:18:10.040   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:18:10.040   13:54:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:18:10.040   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:10.040   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:10.041   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:18:10.298  /dev/nbd0
00:18:10.298    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:10.298  1+0 records in
00:18:10.298  1+0 records out
00:18:10.298  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596046 s, 6.9 MB/s
00:18:10.298    13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:10.298   13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:10.298    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:10.298    13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:10.298     13:54:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:10.556    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:18:10.556    {
00:18:10.556      "nbd_device": "/dev/nbd0",
00:18:10.556      "bdev_name": "Nvme0n1"
00:18:10.556    }
00:18:10.556  ]'
00:18:10.556     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:18:10.556    {
00:18:10.556      "nbd_device": "/dev/nbd0",
00:18:10.556      "bdev_name": "Nvme0n1"
00:18:10.556    }
00:18:10.556  ]'
00:18:10.556     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:10.556    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:18:10.556     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:10.556     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:18:10.556    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:18:10.556    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:18:10.556  256+0 records in
00:18:10.556  256+0 records out
00:18:10.556  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00814175 s, 129 MB/s
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:18:10.556  256+0 records in
00:18:10.556  256+0 records out
00:18:10.556  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562462 s, 18.6 MB/s
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:10.556   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
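The write/verify pass follows a standard round trip: generate random data, push it through the NBD device with direct I/O, then byte-compare. Condensed from the trace above (paths shortened):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0    # non-zero exit on any mismatch
    rm nbdrandtest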
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:10.557   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:10.815    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:10.815   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:10.815    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:10.815    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:10.815     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:11.073    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:11.073     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:11.073     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:11.073    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:11.073     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:11.073     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:11.073     13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:11.073    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:11.073    13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:11.073   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:18:11.073   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:18:11.073   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:18:11.074   13:54:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:11.074   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:11.074   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:18:11.074   13:54:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:18:11.332  malloc_lvol_verify
00:18:11.332   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:18:11.899  b9305161-5ac1-44e9-b4f3-84ce18105119
00:18:11.899   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:18:11.899  0181f728-011a-4294-af90-907d08a8b992
00:18:11.899   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:18:12.158  /dev/nbd0
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:18:12.158  mke2fs 1.47.0 (5-Feb-2023)
00:18:12.158  
00:18:12.158  Filesystem too small for a journal
00:18:12.158  Discarding device blocks:    0/1024         done                            
00:18:12.158  Creating filesystem with 1024 4k blocks and 1024 inodes
00:18:12.158  
00:18:12.158  Allocating group tables: 0/1   done                            
00:18:12.158  Writing inode tables: 0/1   done                            
00:18:12.158  Writing superblocks and filesystem accounting information: 0/1   done
00:18:12.158  
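nbd_with_lvol_verify layers a logical volume over a malloc bdev and proves the stack is usable end to end. The RPC sequence above, in order (rpc here is local shorthand for the traced rpc.py calls, not a helper from the repo):

    rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB backing bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs                     # 4 MiB lvol in store "lvs"
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0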
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:12.158   13:54:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:12.427    13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 79917
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 79917 ']'
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 79917
00:18:12.427    13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:12.427    13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79917
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79917'
00:18:12.427  killing process with pid 79917
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 79917
00:18:12.427   13:54:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 79917
00:18:14.328   13:54:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:18:14.328  
00:18:14.328  real	0m5.962s
00:18:14.328  user	0m8.206s
00:18:14.328  sys	0m1.481s
00:18:14.328   13:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:14.329   13:54:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:14.328  ************************************
00:18:14.328  END TEST bdev_nbd
00:18:14.328  ************************************
00:18:14.328   13:54:56 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:18:14.328   13:54:56 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:18:14.328  skipping fio tests on NVMe due to multi-ns failures.
00:18:14.328   13:54:56 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:18:14.328   13:54:56 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:14.329   13:54:56 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:18:14.329   13:54:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:18:14.329   13:54:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:14.329   13:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:14.329  ************************************
00:18:14.329  START TEST bdev_verify
00:18:14.329  ************************************
00:18:14.329   13:54:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:18:14.329  [2024-12-11 13:54:56.766352] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:14.329  [2024-12-11 13:54:56.766542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80104 ]
00:18:14.329  [2024-12-11 13:54:56.960655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:14.329  [2024-12-11 13:54:57.093726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:14.329  [2024-12-11 13:54:57.093759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:14.896  Running I/O for 5 seconds...
00:18:16.842      18688.00 IOPS,    73.00 MiB/s
[2024-12-11T13:55:00.990Z]     18848.00 IOPS,    73.62 MiB/s
[2024-12-11T13:55:01.945Z]     19520.00 IOPS,    76.25 MiB/s
[2024-12-11T13:55:02.882Z]     19648.00 IOPS,    76.75 MiB/s
[2024-12-11T13:55:02.882Z]     19251.20 IOPS,    75.20 MiB/s
00:18:20.110                                                                                                  Latency(us)
00:18:20.110  
[2024-12-11T13:55:02.882Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:20.110  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:20.110  	 Verification LBA range: start 0x0 length 0xa0000
00:18:20.110  	 Nvme0n1             :       5.01    9609.62      37.54       0.00     0.00   13241.64     986.94   22219.82
00:18:20.110  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:20.110  	 Verification LBA range: start 0xa0000 length 0xa0000
00:18:20.110  	 Nvme0n1             :       5.01    9622.79      37.59       0.00     0.00   13223.81    1435.55   23592.96
00:18:20.110  
[2024-12-11T13:55:02.882Z]  ===================================================================================================================
00:18:20.110  
[2024-12-11T13:55:02.882Z]  Total                       :              19232.41      75.13       0.00     0.00   13232.72     986.94   23592.96
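bdev_verify is a five-second bdevperf run on two cores; the flags in the invocation above map as follows (a gloss of the standard bdevperf options; -C is passed through as traced):

    # -q 128: queue depth; -o 4096: I/O size in bytes; -w verify: read-back-and-compare
    # workload; -t 5: run time in seconds; -m 0x3: core mask (cores 0 and 1)
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3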
00:18:22.011  
00:18:22.011  real	0m7.591s
00:18:22.011  user	0m14.067s
00:18:22.011  sys	0m0.302s
00:18:22.011   13:55:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:22.011   13:55:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:18:22.011  ************************************
00:18:22.011  END TEST bdev_verify
00:18:22.011  ************************************
00:18:22.011   13:55:04 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:18:22.011   13:55:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:18:22.011   13:55:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:22.011   13:55:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:22.011  ************************************
00:18:22.011  START TEST bdev_verify_big_io
00:18:22.011  ************************************
00:18:22.011   13:55:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:18:22.011  [2024-12-11 13:55:04.421573] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:22.011  [2024-12-11 13:55:04.421784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80199 ]
00:18:22.011  [2024-12-11 13:55:04.627360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:22.270  [2024-12-11 13:55:04.816214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.270  [2024-12-11 13:55:04.816242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:22.835  Running I/O for 5 seconds...
00:18:24.706       1441.00 IOPS,    90.06 MiB/s
[2024-12-11T13:55:08.857Z]      1626.50 IOPS,   101.66 MiB/s
[2024-12-11T13:55:09.794Z]      1729.00 IOPS,   108.06 MiB/s
[2024-12-11T13:55:10.730Z]      1765.50 IOPS,   110.34 MiB/s
[2024-12-11T13:55:10.730Z]      1739.20 IOPS,   108.70 MiB/s
00:18:27.958                                                                                                  Latency(us)
00:18:27.958  
[2024-12-11T13:55:10.730Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:27.958  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:27.958  	 Verification LBA range: start 0x0 length 0xa000
00:18:27.958  	 Nvme0n1             :       5.08     881.43      55.09       0.00     0.00  141755.72    1583.79  263641.97
00:18:27.958  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:27.958  	 Verification LBA range: start 0xa000 length 0xa000
00:18:27.958  	 Nvme0n1             :       5.08     875.38      54.71       0.00     0.00  142802.42     854.31  307582.29
00:18:27.958  
[2024-12-11T13:55:10.730Z]  ===================================================================================================================
00:18:27.958  
[2024-12-11T13:55:10.730Z]  Total                       :               1756.81     109.80       0.00     0.00  142277.31     854.31  307582.29
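bdev_verify_big_io repeats the same verify workload with 64 KiB I/O (-o 65536), which is why throughput stays near 110 MiB/s while IOPS drops to roughly 1.7K:

    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3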
00:18:29.863  
00:18:29.863  real	0m7.851s
00:18:29.863  user	0m14.486s
00:18:29.863  sys	0m0.334s
00:18:29.863   13:55:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:29.863   13:55:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:18:29.863  ************************************
00:18:29.863  END TEST bdev_verify_big_io
00:18:29.863  ************************************
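For reference, the run_test invocation above drives bdevperf in its big-I/O verify mode. A hedged sketch of running that same step by hand follows; SPDK_DIR is an assumption standing in for the repo path, and every flag is copied from the trace (-q queue depth, -o I/O size in bytes, -w workload, -t seconds, -m core mask; -C is carried over verbatim without glossing its meaning):

    # Sketch only: same binary and flags as the traced run_test line above.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3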
00:18:29.863   13:55:12 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:29.863   13:55:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:29.863   13:55:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:29.863   13:55:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:29.863  ************************************
00:18:29.863  START TEST bdev_write_zeroes
00:18:29.863  ************************************
00:18:29.863   13:55:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:29.863  [2024-12-11 13:55:12.326933] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:29.863  [2024-12-11 13:55:12.327131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80303 ]
00:18:29.863  [2024-12-11 13:55:12.522333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:29.863  [2024-12-11 13:55:12.653672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:30.431  Running I/O for 1 seconds...
00:18:31.802      49118.00 IOPS,   191.87 MiB/s
00:18:31.802                                                                                                  Latency(us)
00:18:31.802  
[2024-12-11T13:55:14.574Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:31.802  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:31.802  	 Nvme0n1             :       1.01   49025.65     191.51       0.00     0.00    2603.88     951.83    8925.38
00:18:31.802  
[2024-12-11T13:55:14.574Z]  ===================================================================================================================
00:18:31.802  
[2024-12-11T13:55:14.574Z]  Total                       :              49025.65     191.51       0.00     0.00    2603.88     951.83    8925.38
00:18:32.736  
00:18:32.736  real	0m3.208s
00:18:32.736  user	0m2.832s
00:18:32.736  sys	0m0.276s
00:18:32.736  ************************************
00:18:32.736  END TEST bdev_write_zeroes
00:18:32.736  ************************************
00:18:32.736   13:55:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:32.736   13:55:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
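The START/END banners and the real/user/sys triplets that bracket each test come from autotest's run_test wrapper. A condensed sketch of that pattern, mirroring only the output shape seen in this log (the real helper in autotest_common.sh also manages xtrace and argument checks):

    # Sketch of the banner-plus-timing wrapper whose output appears above.
    run_test_sketch() {
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"                     # emits the real/user/sys lines
        local rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return "$rc"
    }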
00:18:32.736   13:55:15 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:32.736   13:55:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:32.736   13:55:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:32.736   13:55:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:32.736  ************************************
00:18:32.736  START TEST bdev_json_nonenclosed
00:18:32.736  ************************************
00:18:32.736   13:55:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:32.993  [2024-12-11 13:55:15.599966] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:32.993  [2024-12-11 13:55:15.600151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80351 ]
00:18:33.250  [2024-12-11 13:55:15.801821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:33.250  [2024-12-11 13:55:15.997832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:33.250  [2024-12-11 13:55:15.997980] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:18:33.250  [2024-12-11 13:55:15.998022] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:18:33.250  [2024-12-11 13:55:15.998042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:33.817  
00:18:33.817  real	0m0.812s
00:18:33.817  user	0m0.554s
00:18:33.817  sys	0m0.156s
00:18:33.817   13:55:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:33.817   13:55:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:18:33.817  ************************************
00:18:33.817  END TEST bdev_json_nonenclosed
00:18:33.817  ************************************
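bdev_json_nonenclosed is a negative test: bdevperf is fed a config whose top level is not enclosed in {}, and the test passes only because the app prints the two *ERROR* lines above and stops non-zero. A sketch of that assertion, with a hypothetical /tmp file standing in for nonenclosed.json (its real contents are not shown in the log):

    # Hypothetical reconstruction: a bare key with no enclosing braces,
    # which json_config_prepare_ctx rejects as "not enclosed in {}".
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    printf '"subsystems": []\n' > /tmp/nonenclosed.json
    if "$SPDK_DIR/build/examples/bdevperf" --json /tmp/nonenclosed.json \
           -q 128 -o 4096 -w write_zeroes -t 1; then
        echo 'FAIL: invalid config was accepted' >&2
    else
        echo 'OK: non-enclosed config rejected, as the trace above shows'
    fi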
00:18:33.817   13:55:16 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:33.817   13:55:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:33.817   13:55:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:33.817   13:55:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:33.817  ************************************
00:18:33.817  START TEST bdev_json_nonarray
00:18:33.817  ************************************
00:18:33.817   13:55:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:33.817  [2024-12-11 13:55:16.483467] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:33.817  [2024-12-11 13:55:16.483711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80382 ]
00:18:34.076  [2024-12-11 13:55:16.696137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.335  [2024-12-11 13:55:16.896447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.335  [2024-12-11 13:55:16.896580] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:18:34.335  [2024-12-11 13:55:16.896611] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:18:34.335  [2024-12-11 13:55:16.896647] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:34.594  
00:18:34.594  real	0m0.862s
00:18:34.594  user	0m0.579s
00:18:34.594  sys	0m0.182s
00:18:34.594   13:55:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:34.594   13:55:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:18:34.594  ************************************
00:18:34.594  END TEST bdev_json_nonarray
00:18:34.594  ************************************
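bdev_json_nonarray is the companion negative test: the config is enclosed in {} but "subsystems" is not an array, which json_config_prepare_ctx also rejects ("'subsystems' should be an array"). The shape being exercised, as a hedged guess at nonarray.json:

    # Hypothetical reconstruction: "subsystems" as an object, not an array.
    # The test counts as passed when bdevperf refuses this and exits non-zero.
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/nonarray.json \
        -q 128 -o 4096 -w write_zeroes -t 1 && echo 'FAIL: accepted' >&2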
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]]
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]]
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]]
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:18:34.594   13:55:17 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:18:34.594  ************************************
00:18:34.594  END TEST blockdev_nvme
00:18:34.594  ************************************
00:18:34.594  
00:18:34.594  real	0m36.452s
00:18:34.594  user	0m53.926s
00:18:34.594  sys	0m4.623s
00:18:34.594   13:55:17 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:34.594   13:55:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:18:34.594    13:55:17  -- spdk/autotest.sh@209 -- # uname -s
00:18:34.594   13:55:17  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:18:34.594   13:55:17  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:18:34.594   13:55:17  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:34.594   13:55:17  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:34.594   13:55:17  -- common/autotest_common.sh@10 -- # set +x
00:18:34.853  ************************************
00:18:34.853  START TEST blockdev_nvme_gpt
00:18:34.853  ************************************
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:18:34.854  * Looking for test storage...
00:18:34.854  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:34.854     13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version
00:18:34.854     13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:34.854     13:55:17 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:34.854    13:55:17 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
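The lt 1.15 2 trace above is scripts/common.sh deciding which lcov option set to use: both version strings are split on '.', '-' and ':' into arrays, missing components default to 0, and the loop decides at the first differing component (here 1 < 2, so it returns 0 and the pre-2.x option block below is selected). A condensed re-implementation of what the trace shows, assuming purely numeric components:

    # Sketch of the traced comparison: split on .-:, compare field by field.
    # Returns 0 (true) when $1 is an older version than $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'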
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:34.854  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.854  		--rc genhtml_branch_coverage=1
00:18:34.854  		--rc genhtml_function_coverage=1
00:18:34.854  		--rc genhtml_legend=1
00:18:34.854  		--rc geninfo_all_blocks=1
00:18:34.854  		--rc geninfo_unexecuted_blocks=1
00:18:34.854  		
00:18:34.854  		'
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:34.854  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.854  		--rc genhtml_branch_coverage=1
00:18:34.854  		--rc genhtml_function_coverage=1
00:18:34.854  		--rc genhtml_legend=1
00:18:34.854  		--rc geninfo_all_blocks=1
00:18:34.854  		--rc geninfo_unexecuted_blocks=1
00:18:34.854  		
00:18:34.854  		'
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:34.854  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.854  		--rc genhtml_branch_coverage=1
00:18:34.854  		--rc genhtml_function_coverage=1
00:18:34.854  		--rc genhtml_legend=1
00:18:34.854  		--rc geninfo_all_blocks=1
00:18:34.854  		--rc geninfo_unexecuted_blocks=1
00:18:34.854  		
00:18:34.854  		'
00:18:34.854    13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:34.854  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.854  		--rc genhtml_branch_coverage=1
00:18:34.854  		--rc genhtml_function_coverage=1
00:18:34.854  		--rc genhtml_legend=1
00:18:34.854  		--rc geninfo_all_blocks=1
00:18:34.854  		--rc geninfo_unexecuted_blocks=1
00:18:34.854  		
00:18:34.854  		'
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:18:34.854    13:55:17 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:18:34.854    13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s
00:18:34.854  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device=
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek=
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx=
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]]
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]]
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80466
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 80466
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 80466 ']'
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.854   13:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:34.854   13:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:35.114  [2024-12-11 13:55:17.715900] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:35.114  [2024-12-11 13:55:17.717045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80466 ]
00:18:35.374  [2024-12-11 13:55:17.930179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.374  [2024-12-11 13:55:18.119874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:36.751   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:36.751   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
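waitforlisten above polls until the freshly started spdk_tgt (pid 80466) answers on /var/tmp/spdk.sock, with max_retries=100 before giving up; the (( i == 0 )) / return 0 pair is the loop concluding on its first probe. A hedged sketch of that pattern, using the rpc.py client seen elsewhere in this log (the real helper's probe may differ):

    # Sketch of the wait-for-RPC-socket loop whose variables appear above.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died early
            "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0             # socket is answering
            sleep 0.1
        done
        return 1                                        # retries exhausted
    }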
00:18:36.751   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:18:36.751   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf
00:18:36.751   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:18:37.010  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:18:37.010  Waiting for block devices as requested
00:18:37.010  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:18:37.268   13:55:19 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1')
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:18:37.268    13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:18:37.268  BYT;
00:18:37.268  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:18:37.268  BYT;
00:18:37.268  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:18:37.268   13:55:19 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
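Steps 110-115 above pick a GPT target safely: probe the namespace with parted -ms print, treat parted's "unrecognised disk label" error as proof the disk is blank, and only then lay down a fresh GPT with two half-disk partitions. The same guard condensed into one function (identical commands, wrapped for reuse):

    # Sketch of the blank-disk check and GPT layout traced above.
    # A disk that already has a label is skipped rather than relabeled.
    setup_gpt_sketch() {
        local dev=$1 pt
        pt=$(parted "$dev" -ms print 2>&1) || true      # the error text is the signal
        if [[ $pt != *"$dev: unrecognised disk label"* ]]; then
            echo "refusing to touch $dev: it already carries a label" >&2
            return 1
        fi
        parted -s "$dev" mklabel gpt \
            mkpart SPDK_TEST_first 0% 50% \
            mkpart SPDK_TEST_second 50% 100%
    }
    setup_gpt_sketch /dev/nvme0n1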
00:18:37.268    13:55:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:18:37.268     13:55:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:18:37.268   13:55:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:18:37.268    13:55:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:18:37.268    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:18:37.268     13:55:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:18:37.269    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:18:37.269    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:18:37.269    13:55:20 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:18:37.269   13:55:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
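get_spdk_gpt and get_spdk_gpt_old above recover the SPDK partition-type GUIDs straight from module/bdev/gpt/gpt.h: grep the macro line, let IFS='()' split out the parenthesized value, then strip the per-field 0x prefixes. Condensed into one helper that follows the trace:

    # Sketch of the GUID extraction traced above (grep, split on parentheses,
    # drop 0x prefixes). Header path as used in this log.
    get_gpt_guid() {
        local macro=$1 gpt_h=$SPDK_DIR/module/bdev/gpt/gpt.h spdk_guid _
        IFS='()' read -r _ spdk_guid _ < <(grep -w "$macro" "$gpt_h")
        spdk_guid=${spdk_guid//0x/}   # 0x6527994e-0x2c5a-... -> 6527994e-2c5a-...
        echo "$spdk_guid"
    }
    get_gpt_guid SPDK_GPT_PART_TYPE_GUID       # 6527994e-2c5a-4eec-9613-8f5944074e8b
    get_gpt_guid SPDK_GPT_PART_TYPE_GUID_OLD   # 7c5222bd-8f5d-4087-9c00-bf9843c7b58c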
00:18:37.269   13:55:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:18:38.654  The operation has completed successfully.
00:18:38.654   13:55:21 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:18:39.592  The operation has completed successfully.
00:18:39.592   13:55:22 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:18:39.851  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:18:40.110  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:18:40.678   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:18:40.678   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.678   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.678  []
00:18:40.678   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.678   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:18:40.678   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:18:40.678   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:18:40.678    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:18:40.678   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:18:40.678   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.678   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name
00:18:40.938    13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
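The mapfile/jq pair above turns the RPC bdev dump into shell arrays: bdev_get_bdevs returns a JSON array, select(.claimed == false) keeps the unclaimed bdevs, and .name reduces each to its name; the replayed printf above holds the two full GPT bdev objects (Nvme0n1p1 and Nvme0n1p2), and Nvme0n1p1 is then picked as hello_world_bdev. The same extraction as a standalone pipeline against a live target:

    # Sketch of the name extraction above; rpc.py talks to the default
    # /var/tmp/spdk.sock unless -s overrides it.
    mapfile -t bdevs_name < <(
        "$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs \
            | jq -r '.[] | select(.claimed == false)' \
            | jq -r .name
    )
    printf 'unclaimed bdev: %s\n' "${bdevs_name[@]}"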
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1p1
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:18:40.938   13:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 80466
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 80466 ']'
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 80466
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:40.938    13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80466
00:18:40.938  killing process with pid 80466
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80466'
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 80466
00:18:40.938   13:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 80466
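killprocess 80466 above is autotest's guarded teardown: confirm the pid is still alive with kill -0, read its command name (reactor_0 here) to decide whether it was launched under sudo, then kill it and wait for it to exit. Condensed:

    # Sketch of the killprocess pattern traced above. wait only reaps
    # children of this shell, matching how autotest launched the target.
    killprocess_sketch() {
        local pid=$1 process_name
        kill -0 "$pid" 2>/dev/null || return 0              # already gone
        process_name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }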
00:18:43.475   13:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:43.475   13:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:18:43.475   13:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:18:43.475   13:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:43.475   13:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:43.475  ************************************
00:18:43.475  START TEST bdev_hello_world
00:18:43.475  ************************************
00:18:43.475   13:55:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:18:43.475  [2024-12-11 13:55:26.253680] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:43.475  [2024-12-11 13:55:26.253883] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80870 ]
00:18:43.735  [2024-12-11 13:55:26.448012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:43.994  [2024-12-11 13:55:26.583359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:44.563  [2024-12-11 13:55:27.107199] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:18:44.563  [2024-12-11 13:55:27.107526] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:18:44.563  [2024-12-11 13:55:27.107577] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:18:44.563  [2024-12-11 13:55:27.111198] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:18:44.563  [2024-12-11 13:55:27.111710] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:18:44.563  [2024-12-11 13:55:27.111747] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:18:44.563  [2024-12-11 13:55:27.111941] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:18:44.563  
00:18:44.563  [2024-12-11 13:55:27.111970] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:18:45.966  
00:18:45.966  real	0m2.204s
00:18:45.966  user	0m1.835s
00:18:45.966  sys	0m0.269s
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:18:45.966  ************************************
00:18:45.966  END TEST bdev_hello_world
00:18:45.966  ************************************
00:18:45.966   13:55:28 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:18:45.966   13:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:45.966   13:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:45.966   13:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:45.966  ************************************
00:18:45.966  START TEST bdev_bounds
00:18:45.966  ************************************
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:18:45.966  Process bdevio pid: 80918
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=80918
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 80918'
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 80918
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 80918 ']'
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:45.966   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:45.966  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:45.967   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:45.967   13:55:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:45.967  [2024-12-11 13:55:28.528840] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:45.967  [2024-12-11 13:55:28.529337] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80918 ]
00:18:45.967  [2024-12-11 13:55:28.732305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:46.226  [2024-12-11 13:55:28.901716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:46.226  [2024-12-11 13:55:28.901851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:46.226  [2024-12-11 13:55:28.901882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:18:46.794   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:46.794   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:18:46.794   13:55:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:18:47.053  I/O targets:
00:18:47.053    Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:18:47.053    Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:18:47.053  
00:18:47.053  
00:18:47.053       CUnit - A unit testing framework for C - Version 2.1-3
00:18:47.053       http://cunit.sourceforge.net/
00:18:47.053  
00:18:47.053  
00:18:47.053  Suite: bdevio tests on: Nvme0n1p2
00:18:47.053    Test: blockdev write read block ...passed
00:18:47.053    Test: blockdev write zeroes read block ...passed
00:18:47.053    Test: blockdev write zeroes read no split ...passed
00:18:47.053    Test: blockdev write zeroes read split ...passed
00:18:47.053    Test: blockdev write zeroes read split partial ...passed
00:18:47.053    Test: blockdev reset ...[2024-12-11 13:55:29.684720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:18:47.053  [2024-12-11 13:55:29.688464] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:18:47.053  passed
00:18:47.053    Test: blockdev write read 8 blocks ...passed
00:18:47.053    Test: blockdev write read size > 128k ...passed
00:18:47.053    Test: blockdev write read invalid size ...passed
00:18:47.053    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:47.053    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:47.053    Test: blockdev write read max offset ...passed
00:18:47.053    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:47.053    Test: blockdev writev readv 8 blocks ...passed
00:18:47.053    Test: blockdev writev readv 30 x 1block ...passed
00:18:47.053    Test: blockdev writev readv block ...passed
00:18:47.053    Test: blockdev writev readv size > 128k ...passed
00:18:47.053    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:47.053    Test: blockdev comparev and writev ...[2024-12-11 13:55:29.698775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2adc0d000 len:0x1000
00:18:47.053  [2024-12-11 13:55:29.698853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:18:47.053  passed
00:18:47.053    Test: blockdev nvme passthru rw ...passed
00:18:47.053    Test: blockdev nvme passthru vendor specific ...passed
00:18:47.053    Test: blockdev nvme admin passthru ...passed
00:18:47.053    Test: blockdev copy ...passed
00:18:47.053  Suite: bdevio tests on: Nvme0n1p1
00:18:47.053    Test: blockdev write read block ...passed
00:18:47.053    Test: blockdev write zeroes read block ...passed
00:18:47.053    Test: blockdev write zeroes read no split ...passed
00:18:47.053    Test: blockdev write zeroes read split ...passed
00:18:47.053    Test: blockdev write zeroes read split partial ...passed
00:18:47.053    Test: blockdev reset ...[2024-12-11 13:55:29.777429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:18:47.053  passed
00:18:47.053    Test: blockdev write read 8 blocks ...[2024-12-11 13:55:29.781358] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:18:47.053  passed
00:18:47.053    Test: blockdev write read size > 128k ...passed
00:18:47.053    Test: blockdev write read invalid size ...passed
00:18:47.053    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:47.053    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:47.053    Test: blockdev write read max offset ...passed
00:18:47.053    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:47.053    Test: blockdev writev readv 8 blocks ...passed
00:18:47.053    Test: blockdev writev readv 30 x 1block ...passed
00:18:47.053    Test: blockdev writev readv block ...passed
00:18:47.053    Test: blockdev writev readv size > 128k ...passed
00:18:47.053    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:47.053    Test: blockdev comparev and writev ...[2024-12-11 13:55:29.790631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2adc09000 len:0x1000
00:18:47.053  [2024-12-11 13:55:29.790727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:18:47.053  passed
00:18:47.053    Test: blockdev nvme passthru rw ...passed
00:18:47.053    Test: blockdev nvme passthru vendor specific ...passed
00:18:47.053    Test: blockdev nvme admin passthru ...passed
00:18:47.053    Test: blockdev copy ...passed
00:18:47.053  
00:18:47.053  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:47.053                suites      2      2    n/a      0        0
00:18:47.053                 tests     46     46     46      0        0
00:18:47.053               asserts    284    284    284      0      n/a
00:18:47.053  
00:18:47.053  Elapsed time =    0.514 seconds
00:18:47.053  0
00:18:47.053   13:55:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 80918
00:18:47.053   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 80918 ']'
00:18:47.053   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 80918
00:18:47.053    13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:18:47.053   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:47.053    13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80918
00:18:47.312   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:47.312   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:47.312   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80918'
00:18:47.312  killing process with pid 80918
00:18:47.312   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 80918
00:18:47.312   13:55:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 80918
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:18:48.690  
00:18:48.690  real	0m2.650s
00:18:48.690  user	0m6.589s
00:18:48.690  sys	0m0.445s
00:18:48.690  ************************************
00:18:48.690  END TEST bdev_bounds
00:18:48.690  ************************************
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:48.690   13:55:31 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:18:48.690   13:55:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:48.690   13:55:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:48.690   13:55:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:48.690  ************************************
00:18:48.690  START TEST bdev_nbd
00:18:48.690  ************************************
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:18:48.690    13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:18:48.690   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=80976
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 80976 /var/tmp/spdk-nbd.sock
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 80976 ']'
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:18:48.691  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:48.691   13:55:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:48.691  [2024-12-11 13:55:31.255022] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:48.691  [2024-12-11 13:55:31.255476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:48.691  [2024-12-11 13:55:31.452731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:48.949  [2024-12-11 13:55:31.609002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:18:49.515   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:18:49.515    13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:18:49.786    13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:49.786  1+0 records in
00:18:49.786  1+0 records out
00:18:49.786  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562451 s, 7.3 MB/s
00:18:49.786    13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:49.786   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
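waitfornbd above is how the harness proves that nbd_start_disk produced a usable block device: poll /proc/partitions until nbd0 appears (up to 20 tries), then do a single 4 KiB O_DIRECT read through dd and check the byte count that landed in the scratch file. The two-phase check, condensed (the log's test is size != 0; this sketch checks the full 4096):

    # Sketch of the waitfornbd pattern traced above: device visible, then
    # one direct read actually returns data.
    waitfornbd_sketch() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -eq 4096 ]]
    }
    waitfornbd_sketch nbd0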
00:18:49.786    13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:18:50.353    13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:50.353  1+0 records in
00:18:50.353  1+0 records out
00:18:50.353  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649841 s, 6.3 MB/s
00:18:50.353    13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:50.353   13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:18:50.353    13:55:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:50.353   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:18:50.353    {
00:18:50.353      "nbd_device": "/dev/nbd0",
00:18:50.353      "bdev_name": "Nvme0n1p1"
00:18:50.353    },
00:18:50.353    {
00:18:50.353      "nbd_device": "/dev/nbd1",
00:18:50.353      "bdev_name": "Nvme0n1p2"
00:18:50.353    }
00:18:50.353  ]'
00:18:50.353   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:18:50.353    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:18:50.353    {
00:18:50.353      "nbd_device": "/dev/nbd0",
00:18:50.353      "bdev_name": "Nvme0n1p1"
00:18:50.353    },
00:18:50.353    {
00:18:50.353      "nbd_device": "/dev/nbd1",
00:18:50.353      "bdev_name": "Nvme0n1p2"
00:18:50.353    }
00:18:50.353  ]'
00:18:50.353    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:50.611    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:50.611   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:51.178     13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:51.178     13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:51.178     13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:51.178     13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:51.178     13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:51.178     13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:51.178    13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:51.178   13:55:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:18:51.438  /dev/nbd0
00:18:51.438    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:51.438  1+0 records in
00:18:51.438  1+0 records out
00:18:51.438  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798322 s, 5.1 MB/s
00:18:51.438    13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:51.438   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:18:51.696  /dev/nbd1
00:18:51.696    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:51.696  1+0 records in
00:18:51.696  1+0 records out
00:18:51.696  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004563 s, 9.0 MB/s
00:18:51.696    13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:51.696   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:51.696    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:51.696    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:51.696     13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:51.955    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:18:51.955    {
00:18:51.955      "nbd_device": "/dev/nbd0",
00:18:51.955      "bdev_name": "Nvme0n1p1"
00:18:51.955    },
00:18:51.955    {
00:18:51.955      "nbd_device": "/dev/nbd1",
00:18:51.955      "bdev_name": "Nvme0n1p2"
00:18:51.955    }
00:18:51.955  ]'
00:18:51.955     13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:51.955     13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:18:51.955    {
00:18:51.955      "nbd_device": "/dev/nbd0",
00:18:51.955      "bdev_name": "Nvme0n1p1"
00:18:51.955    },
00:18:51.955    {
00:18:51.955      "nbd_device": "/dev/nbd1",
00:18:51.955      "bdev_name": "Nvme0n1p2"
00:18:51.955    }
00:18:51.955  ]'
00:18:51.955    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:18:51.955  /dev/nbd1'
00:18:51.955     13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:18:51.955  /dev/nbd1'
00:18:51.955     13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:51.955    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2
00:18:51.955    13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:18:51.955  256+0 records in
00:18:51.955  256+0 records out
00:18:51.955  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00932682 s, 112 MB/s
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:51.955   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:18:52.214  256+0 records in
00:18:52.214  256+0 records out
00:18:52.214  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0700888 s, 15.0 MB/s
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:18:52.214  256+0 records in
00:18:52.214  256+0 records out
00:18:52.214  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0848266 s, 12.4 MB/s
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:52.214   13:55:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:52.478    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:52.478   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:18:52.737    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:52.737   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:52.737    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:52.737    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:52.737     13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:52.996    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:52.996     13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:52.996     13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:52.996    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:52.996     13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:52.996     13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:52.996     13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:52.996    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:52.996    13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:18:52.996   13:55:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:18:53.255  malloc_lvol_verify
00:18:53.255   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:18:53.513  2a07f084-b0a6-4266-aac5-fa5bc32dc1a7
00:18:53.513   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:18:53.772  c39778db-c173-479a-b540-ee60a86d8b5d
00:18:53.773   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:18:54.031  /dev/nbd0
00:18:54.031   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:18:54.031   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:18:54.031   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:18:54.031   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:18:54.031   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:18:54.031  mke2fs 1.47.0 (5-Feb-2023)
00:18:54.031  Discarding device blocks: done
00:18:54.031  Creating filesystem with 1024 4k blocks and 1024 inodes
00:18:54.031  
00:18:54.031  Filesystem too small for a journal
00:18:54.031  
00:18:54.031  Allocating group tables: done
00:18:54.032  Writing inode tables: done
00:18:54.032  Writing superblocks and filesystem accounting information: done
00:18:54.032  
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:54.032   13:55:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:54.291    13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 80976
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 80976 ']'
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 80976
00:18:54.291    13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:54.291    13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80976
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80976'
00:18:54.291  killing process with pid 80976
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 80976
00:18:54.291   13:55:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 80976
00:18:56.197   13:55:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:18:56.197  
00:18:56.197  real	0m7.332s
00:18:56.197  user	0m10.165s
00:18:56.197  sys	0m2.084s
00:18:56.197   13:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:56.197   13:55:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:56.197  ************************************
00:18:56.197  END TEST bdev_nbd
00:18:56.197  ************************************
00:18:56.197   13:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:18:56.197   13:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']'
00:18:56.197   13:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']'
00:18:56.197   13:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:18:56.197  skipping fio tests on NVMe due to multi-ns failures.
00:18:56.197   13:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:56.197   13:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:18:56.197   13:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:18:56.197   13:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:56.197   13:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:18:56.197  ************************************
00:18:56.197  START TEST bdev_verify
00:18:56.197  ************************************
00:18:56.197   13:55:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:18:56.197  [2024-12-11 13:55:38.623599] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:18:56.197  [2024-12-11 13:55:38.623767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81220 ]
00:18:56.197  [2024-12-11 13:55:38.798569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:56.197  [2024-12-11 13:55:38.933579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:18:56.197  [2024-12-11 13:55:38.933579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:56.765  Running I/O for 5 seconds...
00:18:59.079      17408.00 IOPS,    68.00 MiB/s
[2024-12-11T13:55:42.789Z]     17600.00 IOPS,    68.75 MiB/s
[2024-12-11T13:55:43.725Z]     17536.00 IOPS,    68.50 MiB/s
[2024-12-11T13:55:44.662Z]     17536.00 IOPS,    68.50 MiB/s
00:19:01.890                                                                                                  Latency(us)
00:19:01.890  
[2024-12-11T13:55:44.662Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:01.890  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:01.890  	 Verification LBA range: start 0x0 length 0x4ff80
00:19:01.890  	 Nvme0n1p1           :       5.02    4358.05      17.02       0.00     0.00   29279.99    4556.31   33953.89
00:19:01.890  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:01.890  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:19:01.890  	 Nvme0n1p1           :       5.02    4355.91      17.02       0.00     0.00   29292.63    3994.58   37199.48
00:19:01.890  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:01.890  	 Verification LBA range: start 0x0 length 0x4ff7f
00:19:01.890  	 Nvme0n1p2           :       5.02    4356.70      17.02       0.00     0.00   29242.59    4181.82   26838.55
00:19:01.890  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:01.890  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:19:01.890  	 Nvme0n1p2           :       5.02    4357.43      17.02       0.00     0.00   29204.98    4462.69   25715.08
00:19:01.890  
[2024-12-11T13:55:44.662Z]  ===================================================================================================================
00:19:01.890  
[2024-12-11T13:55:44.662Z]  Total                       :              17428.09      68.08       0.00     0.00   29255.05    3994.58   37199.48
00:19:03.269  
00:19:03.269  real	0m7.388s
00:19:03.269  user	0m13.726s
00:19:03.269  sys	0m0.287s
00:19:03.270   13:55:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:03.270   13:55:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:03.270  ************************************
00:19:03.270  END TEST bdev_verify
00:19:03.270  ************************************
00:19:03.270   13:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:03.270   13:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:03.270   13:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:03.270   13:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:19:03.270  ************************************
00:19:03.270  START TEST bdev_verify_big_io
00:19:03.270  ************************************
00:19:03.270   13:55:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:03.529  [2024-12-11 13:55:46.087351] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:19:03.529  [2024-12-11 13:55:46.087552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81320 ]
00:19:03.529  [2024-12-11 13:55:46.284819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:03.802  [2024-12-11 13:55:46.420309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:03.802  [2024-12-11 13:55:46.420320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:19:04.370  Running I/O for 5 seconds...
00:19:06.685       1550.00 IOPS,    96.88 MiB/s
[2024-12-11T13:55:50.835Z]      1736.00 IOPS,   108.50 MiB/s
[2024-12-11T13:55:51.770Z]      1796.67 IOPS,   112.29 MiB/s
[2024-12-11T13:55:52.338Z]      1859.50 IOPS,   116.22 MiB/s
00:19:09.566                                                                                                  Latency(us)
00:19:09.566  
[2024-12-11T13:55:52.338Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:09.566  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:09.566  	 Verification LBA range: start 0x0 length 0x4ff8
00:19:09.566  	 Nvme0n1p1           :       5.16     446.41      27.90       0.00     0.00  282254.07   11047.50  291603.99
00:19:09.566  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:09.566  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:19:09.566  	 Nvme0n1p1           :       5.18     469.73      29.36       0.00     0.00  267681.92    5523.75  309579.58
00:19:09.566  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:09.566  	 Verification LBA range: start 0x0 length 0x4ff7
00:19:09.566  	 Nvme0n1p2           :       5.17     445.74      27.86       0.00     0.00  275711.82    4431.48  291603.99
00:19:09.566  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:09.566  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:19:09.566  	 Nvme0n1p2           :       5.18     469.50      29.34       0.00     0.00  260502.80    3151.97  315571.44
00:19:09.566  
[2024-12-11T13:55:52.338Z]  ===================================================================================================================
00:19:09.566  
[2024-12-11T13:55:52.338Z]  Total                       :               1831.38     114.46       0.00     0.00  271336.43    3151.97  315571.44
00:19:10.972  
00:19:10.972  real	0m7.546s
00:19:10.972  user	0m13.950s
00:19:10.972  sys	0m0.322s
00:19:10.972   13:55:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:10.972   13:55:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:10.972  ************************************
00:19:10.972  END TEST bdev_verify_big_io
00:19:10.972  ************************************
00:19:10.972   13:55:53 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:10.972   13:55:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:10.972   13:55:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:10.972   13:55:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:19:10.972  ************************************
00:19:10.972  START TEST bdev_write_zeroes
00:19:10.972  ************************************
00:19:10.972   13:55:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:10.972  [2024-12-11 13:55:53.693834] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:19:10.972  [2024-12-11 13:55:53.694067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81414 ]
00:19:11.230  [2024-12-11 13:55:53.894488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:11.488  [2024-12-11 13:55:54.028693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:12.055  Running I/O for 1 seconds...
00:19:12.994      48384.00 IOPS,   189.00 MiB/s
00:19:12.994                                                                                                  Latency(us)
00:19:12.994  
[2024-12-11T13:55:55.766Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:12.994  Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:12.994  	 Nvme0n1p1           :       1.01   24176.85      94.44       0.00     0.00    5280.69    3557.67   17850.76
00:19:12.994  Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:12.994  	 Nvme0n1p2           :       1.01   24129.27      94.25       0.00     0.00    5283.48    3635.69   18100.42
00:19:12.994  
[2024-12-11T13:55:55.766Z]  ===================================================================================================================
00:19:12.994  
[2024-12-11T13:55:55.766Z]  Total                       :              48306.12     188.70       0.00     0.00    5282.09    3557.67   18100.42
00:19:14.377  
00:19:14.377  real	0m3.391s
00:19:14.377  user	0m3.004s
00:19:14.377  sys	0m0.287s
00:19:14.377   13:55:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:14.377   13:55:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:14.377  ************************************
00:19:14.377  END TEST bdev_write_zeroes
00:19:14.377  ************************************
00:19:14.377   13:55:57 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:14.377   13:55:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:14.377   13:55:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:14.377   13:55:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:19:14.377  ************************************
00:19:14.377  START TEST bdev_json_nonenclosed
00:19:14.377  ************************************
00:19:14.377   13:55:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:14.377  [2024-12-11 13:55:57.148665] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:19:14.377  [2024-12-11 13:55:57.148857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81472 ]
00:19:14.636  [2024-12-11 13:55:57.344604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:14.895  [2024-12-11 13:55:57.478027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:14.895  [2024-12-11 13:55:57.478122] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:14.895  [2024-12-11 13:55:57.478148] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:14.895  [2024-12-11 13:55:57.478161] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:15.154  
00:19:15.154  real	0m0.682s
00:19:15.154  user	0m0.420s
00:19:15.154  sys	0m0.161s
00:19:15.154   13:55:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:15.154   13:55:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:15.154  ************************************
00:19:15.154  END TEST bdev_json_nonenclosed
00:19:15.154  ************************************
00:19:15.154   13:55:57 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:15.154   13:55:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:15.154   13:55:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:15.154   13:55:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:19:15.154  ************************************
00:19:15.154  START TEST bdev_json_nonarray
00:19:15.154  ************************************
00:19:15.154   13:55:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:15.154  [2024-12-11 13:55:57.898612] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:19:15.154  [2024-12-11 13:55:57.898884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81497 ]
00:19:15.413  [2024-12-11 13:55:58.095756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:15.672  [2024-12-11 13:55:58.230439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:15.672  [2024-12-11 13:55:58.230544] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:15.672  [2024-12-11 13:55:58.230572] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:15.672  [2024-12-11 13:55:58.230587] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:15.931  
00:19:15.931  real	0m0.691s
00:19:15.931  user	0m0.449s
00:19:15.931  sys	0m0.142s
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:15.931  ************************************
00:19:15.931  END TEST bdev_json_nonarray
00:19:15.931  ************************************
00:19:15.931   13:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]]
00:19:15.931   13:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]]
00:19:15.931   13:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:19:15.931   13:55:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:15.931   13:55:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:15.931   13:55:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:19:15.931  ************************************
00:19:15.931  START TEST bdev_gpt_uuid
00:19:15.931  ************************************
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81528
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 81528
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 81528 ']'
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:15.931  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:15.931   13:55:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:19:15.931  [2024-12-11 13:55:58.673948] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:19:15.932  [2024-12-11 13:55:58.674134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81528 ]
00:19:16.191  [2024-12-11 13:55:58.872112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:16.449  [2024-12-11 13:55:59.013574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:17.438   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:17.438   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:19:17.438   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:17.438   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.438   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:19:17.697  Some configs were skipped because the RPC state that can call them passed over.
00:19:17.697   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.697   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine
00:19:17.697   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.697   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:19:17.697   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[
00:19:17.698  {
00:19:17.698  "name": "Nvme0n1p1",
00:19:17.698  "aliases": [
00:19:17.698  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:19:17.698  ],
00:19:17.698  "product_name": "GPT Disk",
00:19:17.698  "block_size": 4096,
00:19:17.698  "num_blocks": 655104,
00:19:17.698  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:19:17.698  "assigned_rate_limits": {
00:19:17.698  "rw_ios_per_sec": 0,
00:19:17.698  "rw_mbytes_per_sec": 0,
00:19:17.698  "r_mbytes_per_sec": 0,
00:19:17.698  "w_mbytes_per_sec": 0
00:19:17.698  },
00:19:17.698  "claimed": false,
00:19:17.698  "zoned": false,
00:19:17.698  "supported_io_types": {
00:19:17.698  "read": true,
00:19:17.698  "write": true,
00:19:17.698  "unmap": true,
00:19:17.698  "flush": true,
00:19:17.698  "reset": true,
00:19:17.698  "nvme_admin": false,
00:19:17.698  "nvme_io": false,
00:19:17.698  "nvme_io_md": false,
00:19:17.698  "write_zeroes": true,
00:19:17.698  "zcopy": false,
00:19:17.698  "get_zone_info": false,
00:19:17.698  "zone_management": false,
00:19:17.698  "zone_append": false,
00:19:17.698  "compare": true,
00:19:17.698  "compare_and_write": false,
00:19:17.698  "abort": true,
00:19:17.698  "seek_hole": false,
00:19:17.698  "seek_data": false,
00:19:17.698  "copy": true,
00:19:17.698  "nvme_iov_md": false
00:19:17.698  },
00:19:17.698  "driver_specific": {
00:19:17.698  "gpt": {
00:19:17.698  "base_bdev": "Nvme0n1",
00:19:17.698  "offset_blocks": 256,
00:19:17.698  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:19:17.698  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:19:17.698  "partition_name": "SPDK_TEST_first"
00:19:17.698  }
00:19:17.698  }
00:19:17.698  }
00:19:17.698  ]'
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]]
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]'
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[
00:19:17.698  {
00:19:17.698  "name": "Nvme0n1p2",
00:19:17.698  "aliases": [
00:19:17.698  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:19:17.698  ],
00:19:17.698  "product_name": "GPT Disk",
00:19:17.698  "block_size": 4096,
00:19:17.698  "num_blocks": 655103,
00:19:17.698  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:19:17.698  "assigned_rate_limits": {
00:19:17.698  "rw_ios_per_sec": 0,
00:19:17.698  "rw_mbytes_per_sec": 0,
00:19:17.698  "r_mbytes_per_sec": 0,
00:19:17.698  "w_mbytes_per_sec": 0
00:19:17.698  },
00:19:17.698  "claimed": false,
00:19:17.698  "zoned": false,
00:19:17.698  "supported_io_types": {
00:19:17.698  "read": true,
00:19:17.698  "write": true,
00:19:17.698  "unmap": true,
00:19:17.698  "flush": true,
00:19:17.698  "reset": true,
00:19:17.698  "nvme_admin": false,
00:19:17.698  "nvme_io": false,
00:19:17.698  "nvme_io_md": false,
00:19:17.698  "write_zeroes": true,
00:19:17.698  "zcopy": false,
00:19:17.698  "get_zone_info": false,
00:19:17.698  "zone_management": false,
00:19:17.698  "zone_append": false,
00:19:17.698  "compare": true,
00:19:17.698  "compare_and_write": false,
00:19:17.698  "abort": true,
00:19:17.698  "seek_hole": false,
00:19:17.698  "seek_data": false,
00:19:17.698  "copy": true,
00:19:17.698  "nvme_iov_md": false
00:19:17.698  },
00:19:17.698  "driver_specific": {
00:19:17.698  "gpt": {
00:19:17.698  "base_bdev": "Nvme0n1",
00:19:17.698  "offset_blocks": 655360,
00:19:17.698  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:19:17.698  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:19:17.698  "partition_name": "SPDK_TEST_second"
00:19:17.698  }
00:19:17.698  }
00:19:17.698  }
00:19:17.698  ]'
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]]
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]'
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 81528
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 81528 ']'
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 81528
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:17.698    13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81528
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81528'
00:19:17.698  killing process with pid 81528
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 81528
00:19:17.698   13:56:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 81528
00:19:20.230  
00:19:20.230  real	0m4.410s
00:19:20.230  user	0m4.318s
00:19:20.230  sys	0m0.631s
00:19:20.230   13:56:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:20.230   13:56:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:19:20.230  ************************************
00:19:20.230  END TEST bdev_gpt_uuid
00:19:20.230  ************************************
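The bdev_gpt_uuid test that just finished drives its checks with jq over the bdev_get_bdevs RPC output: exactly one bdev must match, and both its first alias and its GPT unique_partition_guid must equal the expected UUID. A minimal standalone sketch of that verification, assuming a running SPDK app; the bdev name Nvme0n1p2 is an assumption, and the UUID is the one observed in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
  bdevs=$("$rpc" bdev_get_bdevs -b Nvme0n1p2)
  [[ $(jq -r 'length' <<< "$bdevs") == 1 ]] &&
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdevs") == "$expected" ]] &&
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdevs") == "$expected" ]]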
00:19:20.488   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]]
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:19:20.489   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:19:20.747  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:19:20.747  Waiting for block devices as requested
00:19:20.747  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:21.005   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:19:21.005   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:19:21.264  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:19:21.264  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:19:21.264  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:19:21.264  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:19:21.264   13:56:03 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:19:21.264  
00:19:21.264  real	0m46.519s
00:19:21.264  user	1m3.902s
00:19:21.264  sys	0m7.925s
00:19:21.264   13:56:03 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.264   13:56:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:19:21.264  ************************************
00:19:21.264  END TEST blockdev_nvme_gpt
00:19:21.264  ************************************
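Cleanup for this suite returned the device to the kernel nvme driver via setup.sh reset and then stripped the partition signatures with wipefs, which the output above shows erasing both GPT headers and the protective MBR. A hedged sketch of that teardown (destructive; /dev/nvme0n1 is the device from this run, not a general default):

  # Wipe the GPT/PMBR signatures left behind by the partition tests.
  if [[ -b /dev/nvme0n1 ]]; then
      wipefs --all /dev/nvme0n1
  fi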
00:19:21.264   13:56:03  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:19:21.264   13:56:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:21.264   13:56:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:21.264   13:56:03  -- common/autotest_common.sh@10 -- # set +x
00:19:21.264  ************************************
00:19:21.264  START TEST nvme
00:19:21.264  ************************************
00:19:21.264   13:56:03 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:19:21.264  * Looking for test storage...
00:19:21.523  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:21.523     13:56:04 nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:19:21.523     13:56:04 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:21.523    13:56:04 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:21.523    13:56:04 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:21.523    13:56:04 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:21.523    13:56:04 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:19:21.523    13:56:04 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:19:21.523    13:56:04 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:19:21.523    13:56:04 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:19:21.523    13:56:04 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:19:21.523    13:56:04 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:19:21.523    13:56:04 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:19:21.523    13:56:04 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:21.523    13:56:04 nvme -- scripts/common.sh@344 -- # case "$op" in
00:19:21.523    13:56:04 nvme -- scripts/common.sh@345 -- # : 1
00:19:21.523    13:56:04 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:21.523    13:56:04 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:21.523     13:56:04 nvme -- scripts/common.sh@365 -- # decimal 1
00:19:21.523     13:56:04 nvme -- scripts/common.sh@353 -- # local d=1
00:19:21.523     13:56:04 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:21.523     13:56:04 nvme -- scripts/common.sh@355 -- # echo 1
00:19:21.523    13:56:04 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:19:21.523     13:56:04 nvme -- scripts/common.sh@366 -- # decimal 2
00:19:21.523     13:56:04 nvme -- scripts/common.sh@353 -- # local d=2
00:19:21.523     13:56:04 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:21.523     13:56:04 nvme -- scripts/common.sh@355 -- # echo 2
00:19:21.523    13:56:04 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:19:21.523    13:56:04 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:21.523    13:56:04 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:21.523    13:56:04 nvme -- scripts/common.sh@368 -- # return 0
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:21.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.523  		--rc genhtml_branch_coverage=1
00:19:21.523  		--rc genhtml_function_coverage=1
00:19:21.523  		--rc genhtml_legend=1
00:19:21.523  		--rc geninfo_all_blocks=1
00:19:21.523  		--rc geninfo_unexecuted_blocks=1
00:19:21.523  		
00:19:21.523  		'
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:21.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.523  		--rc genhtml_branch_coverage=1
00:19:21.523  		--rc genhtml_function_coverage=1
00:19:21.523  		--rc genhtml_legend=1
00:19:21.523  		--rc geninfo_all_blocks=1
00:19:21.523  		--rc geninfo_unexecuted_blocks=1
00:19:21.523  		
00:19:21.523  		'
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:21.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.523  		--rc genhtml_branch_coverage=1
00:19:21.523  		--rc genhtml_function_coverage=1
00:19:21.523  		--rc genhtml_legend=1
00:19:21.523  		--rc geninfo_all_blocks=1
00:19:21.523  		--rc geninfo_unexecuted_blocks=1
00:19:21.523  		
00:19:21.523  		'
00:19:21.523    13:56:04 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:21.523  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.523  		--rc genhtml_branch_coverage=1
00:19:21.523  		--rc genhtml_function_coverage=1
00:19:21.523  		--rc genhtml_legend=1
00:19:21.523  		--rc geninfo_all_blocks=1
00:19:21.523  		--rc geninfo_unexecuted_blocks=1
00:19:21.523  		
00:19:21.523  		'
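The xtrace above walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' via IFS, then compared numerically one component at a time ('lt 1.15 2' is deciding whether the installed lcov predates 2.0, which picks the LCOV_OPTS set just after). A compact re-implementation under the same splitting rule; the function name is illustrative, not the script's own:

  # Returns 0 when $1 sorts before $2, comparing dotted components numerically.
  version_lt() {
      local IFS=.-: a b v
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }

  version_lt 1.15 2 && echo "old lcov"   # prints "old lcov"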
00:19:21.523   13:56:04 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:19:22.090  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:19:22.090  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
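setup.sh above moves the QEMU NVMe controller (1b36 0010 at 0000:00:10.0) from the kernel nvme driver to uio_pci_generic so SPDK can drive it from userspace; the vda mounts are detected as in use and skipped. The earlier 'setup.sh reset' was the inverse rebind. As a pair, using the commands exactly as logged:

  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/scripts/setup.sh"         # nvme -> uio_pci_generic for SPDK
  # ... userspace NVMe tests run here ...
  "$rootdir/scripts/setup.sh" reset   # rebind to the kernel nvme driver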
00:19:23.038    13:56:05 nvme -- nvme/nvme.sh@79 -- # uname
00:19:23.038   13:56:05 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:19:23.038   13:56:05 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:19:23.038   13:56:05 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1075 -- # stubpid=81923
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes...
00:19:23.038  Waiting for stub to ready for secondary processes...
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/81923 ]]
00:19:23.038   13:56:05 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:19:23.038  [2024-12-11 13:56:05.663286] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:19:23.038  [2024-12-11 13:56:05.663416] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:19:23.974   13:56:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:19:23.974   13:56:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/81923 ]]
00:19:23.974   13:56:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:19:23.974  [2024-12-11 13:56:06.720483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:24.232  [2024-12-11 13:56:06.879993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:19:24.232  [2024-12-11 13:56:06.880147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:19:24.232  [2024-12-11 13:56:06.880192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:19:24.232  [2024-12-11 13:56:06.890707] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:19:24.232  [2024-12-11 13:56:06.890772] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:19:24.232  [2024-12-11 13:56:06.902686] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:19:24.232  [2024-12-11 13:56:06.902822] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:19:25.168   13:56:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:19:25.168   13:56:07 nvme -- common/autotest_common.sh@1082 -- # echo done.
00:19:25.168  done.
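The loop above (sleep 1s until /var/run/spdk_stub0 appears, while /proc/$stubpid still exists) is how _start_stub gates the secondary-process tests on the stub's DPDK initialization. Condensed, with the paths and flags from this run:

  # Launch the multi-process stub and wait until it signals readiness.
  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  while [[ ! -e /var/run/spdk_stub0 ]]; do
      [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
      sleep 1s
  done
  echo done.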
00:19:25.168   13:56:07 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:19:25.168   13:56:07 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:19:25.168   13:56:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:25.168   13:56:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:25.168  ************************************
00:19:25.168  START TEST nvme_reset
00:19:25.168  ************************************
00:19:25.168   13:56:07 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:19:25.426  Initializing NVMe Controllers
00:19:25.426  Skipping QEMU NVMe SSD at 0000:00:10.0
00:19:25.426  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:19:25.426  
00:19:25.426  real	0m0.339s
00:19:25.426  user	0m0.122s
00:19:25.426  sys	0m0.172s
00:19:25.426   13:56:07 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:25.426   13:56:07 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:19:25.426  ************************************
00:19:25.426  END TEST nvme_reset
00:19:25.426  ************************************
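Each suite here is driven through run_test, which prints the START banner, times the given command, and closes with the END banner; the real/user/sys lines between them come from bash's time. A rough sketch of the wrapper's shape (simplified; the real helper in autotest_common.sh also validates its arguments and toggles xtrace, as the @1105/@1111/@1129/@1130 trace lines show):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }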
00:19:25.426   13:56:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:19:25.426   13:56:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:25.426   13:56:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:25.426   13:56:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:25.426  ************************************
00:19:25.426  START TEST nvme_identify
00:19:25.426  ************************************
00:19:25.426   13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:19:25.426   13:56:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:19:25.426   13:56:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:19:25.426   13:56:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:19:25.426    13:56:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:19:25.426    13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:19:25.426    13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:19:25.426    13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:19:25.426     13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:19:25.426     13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:19:25.426    13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:19:25.427    13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:19:25.427   13:56:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:19:25.685  [2024-12-11 13:56:08.434694] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 81954 terminated unexpected
00:19:25.685  =====================================================
00:19:25.685  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:25.685  =====================================================
00:19:25.685  Controller Capabilities/Features
00:19:25.685  ================================
00:19:25.685  Vendor ID:                             1b36
00:19:25.685  Subsystem Vendor ID:                   1af4
00:19:25.685  Serial Number:                         12340
00:19:25.685  Model Number:                          QEMU NVMe Ctrl
00:19:25.685  Firmware Version:                      8.0.0
00:19:25.685  Recommended Arb Burst:                 6
00:19:25.685  IEEE OUI Identifier:                   00 54 52
00:19:25.685  Multi-path I/O
00:19:25.685    May have multiple subsystem ports:   No
00:19:25.685    May have multiple controllers:       No
00:19:25.685    Associated with SR-IOV VF:           No
00:19:25.685  Max Data Transfer Size:                524288
00:19:25.685  Max Number of Namespaces:              256
00:19:25.685  Max Number of I/O Queues:              64
00:19:25.685  NVMe Specification Version (VS):       1.4
00:19:25.685  NVMe Specification Version (Identify): 1.4
00:19:25.685  Maximum Queue Entries:                 2048
00:19:25.685  Contiguous Queues Required:            Yes
00:19:25.685  Arbitration Mechanisms Supported
00:19:25.685    Weighted Round Robin:                Not Supported
00:19:25.685    Vendor Specific:                     Not Supported
00:19:25.685  Reset Timeout:                         7500 ms
00:19:25.685  Doorbell Stride:                       4 bytes
00:19:25.685  NVM Subsystem Reset:                   Not Supported
00:19:25.685  Command Sets Supported
00:19:25.685    NVM Command Set:                     Supported
00:19:25.685  Boot Partition:                        Not Supported
00:19:25.685  Memory Page Size Minimum:              4096 bytes
00:19:25.685  Memory Page Size Maximum:              65536 bytes
00:19:25.685  Persistent Memory Region:              Not Supported
00:19:25.685  Optional Asynchronous Events Supported
00:19:25.685    Namespace Attribute Notices:         Supported
00:19:25.685    Firmware Activation Notices:         Not Supported
00:19:25.685    ANA Change Notices:                  Not Supported
00:19:25.685    PLE Aggregate Log Change Notices:    Not Supported
00:19:25.685    LBA Status Info Alert Notices:       Not Supported
00:19:25.685    EGE Aggregate Log Change Notices:    Not Supported
00:19:25.685    Normal NVM Subsystem Shutdown event: Not Supported
00:19:25.685    Zone Descriptor Change Notices:      Not Supported
00:19:25.685    Discovery Log Change Notices:        Not Supported
00:19:25.685  Controller Attributes
00:19:25.685    128-bit Host Identifier:             Not Supported
00:19:25.685    Non-Operational Permissive Mode:     Not Supported
00:19:25.685    NVM Sets:                            Not Supported
00:19:25.685    Read Recovery Levels:                Not Supported
00:19:25.685    Endurance Groups:                    Not Supported
00:19:25.685    Predictable Latency Mode:            Not Supported
00:19:25.685    Traffic Based Keep ALive:            Not Supported
00:19:25.685    Namespace Granularity:               Not Supported
00:19:25.685    SQ Associations:                     Not Supported
00:19:25.686    UUID List:                           Not Supported
00:19:25.686    Multi-Domain Subsystem:              Not Supported
00:19:25.686    Fixed Capacity Management:           Not Supported
00:19:25.686    Variable Capacity Management:        Not Supported
00:19:25.686    Delete Endurance Group:              Not Supported
00:19:25.686    Delete NVM Set:                      Not Supported
00:19:25.686    Extended LBA Formats Supported:      Supported
00:19:25.686    Flexible Data Placement Supported:   Not Supported
00:19:25.686  
00:19:25.686  Controller Memory Buffer Support
00:19:25.686  ================================
00:19:25.686  Supported:                             No
00:19:25.686  
00:19:25.686  Persistent Memory Region Support
00:19:25.686  ================================
00:19:25.686  Supported:                             No
00:19:25.686  
00:19:25.686  Admin Command Set Attributes
00:19:25.686  ============================
00:19:25.686  Security Send/Receive:                 Not Supported
00:19:25.686  Format NVM:                            Supported
00:19:25.686  Firmware Activate/Download:            Not Supported
00:19:25.686  Namespace Management:                  Supported
00:19:25.686  Device Self-Test:                      Not Supported
00:19:25.686  Directives:                            Supported
00:19:25.686  NVMe-MI:                               Not Supported
00:19:25.686  Virtualization Management:             Not Supported
00:19:25.686  Doorbell Buffer Config:                Supported
00:19:25.686  Get LBA Status Capability:             Not Supported
00:19:25.686  Command & Feature Lockdown Capability: Not Supported
00:19:25.686  Abort Command Limit:                   4
00:19:25.686  Async Event Request Limit:             4
00:19:25.686  Number of Firmware Slots:              N/A
00:19:25.686  Firmware Slot 1 Read-Only:             N/A
00:19:25.686  Firmware Activation Without Reset:     N/A
00:19:25.686  Multiple Update Detection Support:     N/A
00:19:25.686  Firmware Update Granularity:           No Information Provided
00:19:25.686  Per-Namespace SMART Log:               Yes
00:19:25.686  Asymmetric Namespace Access Log Page:  Not Supported
00:19:25.686  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:19:25.686  Command Effects Log Page:              Supported
00:19:25.686  Get Log Page Extended Data:            Supported
00:19:25.686  Telemetry Log Pages:                   Not Supported
00:19:25.686  Persistent Event Log Pages:            Not Supported
00:19:25.686  Supported Log Pages Log Page:          May Support
00:19:25.686  Commands Supported & Effects Log Page: Not Supported
00:19:25.686  Feature Identifiers & Effects Log Page: May Support
00:19:25.686  NVMe-MI Commands & Effects Log Page:   May Support
00:19:25.686  Data Area 4 for Telemetry Log:         Not Supported
00:19:25.686  Error Log Page Entries Supported:      1
00:19:25.686  Keep Alive:                            Not Supported
00:19:25.686  
00:19:25.686  NVM Command Set Attributes
00:19:25.686  ==========================
00:19:25.686  Submission Queue Entry Size
00:19:25.686    Max:                       64
00:19:25.686    Min:                       64
00:19:25.686  Completion Queue Entry Size
00:19:25.686    Max:                       16
00:19:25.686    Min:                       16
00:19:25.686  Number of Namespaces:        256
00:19:25.686  Compare Command:             Supported
00:19:25.686  Write Uncorrectable Command: Not Supported
00:19:25.686  Dataset Management Command:  Supported
00:19:25.686  Write Zeroes Command:        Supported
00:19:25.686  Set Features Save Field:     Supported
00:19:25.686  Reservations:                Not Supported
00:19:25.686  Timestamp:                   Supported
00:19:25.686  Copy:                        Supported
00:19:25.686  Volatile Write Cache:        Present
00:19:25.686  Atomic Write Unit (Normal):  1
00:19:25.686  Atomic Write Unit (PFail):   1
00:19:25.686  Atomic Compare & Write Unit: 1
00:19:25.686  Fused Compare & Write:       Not Supported
00:19:25.686  Scatter-Gather List
00:19:25.686    SGL Command Set:           Supported
00:19:25.686    SGL Keyed:                 Not Supported
00:19:25.686    SGL Bit Bucket Descriptor: Not Supported
00:19:25.686    SGL Metadata Pointer:      Not Supported
00:19:25.686    Oversized SGL:             Not Supported
00:19:25.686    SGL Metadata Address:      Not Supported
00:19:25.686    SGL Offset:                Not Supported
00:19:25.686    Transport SGL Data Block:  Not Supported
00:19:25.686  Replay Protected Memory Block:  Not Supported
00:19:25.686  
00:19:25.686  Firmware Slot Information
00:19:25.686  =========================
00:19:25.686  Active slot:                 1
00:19:25.686  Slot 1 Firmware Revision:    1.0
00:19:25.686  
00:19:25.686  
00:19:25.686  Commands Supported and Effects
00:19:25.686  ==============================
00:19:25.686  Admin Commands
00:19:25.686  --------------
00:19:25.686     Delete I/O Submission Queue (00h): Supported 
00:19:25.686     Create I/O Submission Queue (01h): Supported 
00:19:25.686                    Get Log Page (02h): Supported 
00:19:25.686     Delete I/O Completion Queue (04h): Supported 
00:19:25.686     Create I/O Completion Queue (05h): Supported 
00:19:25.686                        Identify (06h): Supported 
00:19:25.686                           Abort (08h): Supported 
00:19:25.686                    Set Features (09h): Supported 
00:19:25.686                    Get Features (0Ah): Supported 
00:19:25.686      Asynchronous Event Request (0Ch): Supported 
00:19:25.686            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:19:25.686                  Directive Send (19h): Supported 
00:19:25.686               Directive Receive (1Ah): Supported 
00:19:25.686       Virtualization Management (1Ch): Supported 
00:19:25.686          Doorbell Buffer Config (7Ch): Supported 
00:19:25.686                      Format NVM (80h): Supported LBA-Change 
00:19:25.686  I/O Commands
00:19:25.686  ------------
00:19:25.686                           Flush (00h): Supported LBA-Change 
00:19:25.686                           Write (01h): Supported LBA-Change 
00:19:25.686                            Read (02h): Supported 
00:19:25.686                         Compare (05h): Supported 
00:19:25.686                    Write Zeroes (08h): Supported LBA-Change 
00:19:25.686              Dataset Management (09h): Supported LBA-Change 
00:19:25.686                         Unknown (0Ch): Supported 
00:19:25.686                         Unknown (12h): Supported 
00:19:25.686                            Copy (19h): Supported LBA-Change 
00:19:25.686                         Unknown (1Dh): Supported LBA-Change 
00:19:25.686  
00:19:25.686  Error Log
00:19:25.686  =========
00:19:25.686  
00:19:25.686  Arbitration
00:19:25.686  ===========
00:19:25.686  Arbitration Burst:           no limit
00:19:25.686  
00:19:25.686  Power Management
00:19:25.686  ================
00:19:25.686  Number of Power States:          1
00:19:25.686  Current Power State:             Power State #0
00:19:25.686  Power State #0:
00:19:25.686    Max Power:                     25.00 W
00:19:25.686    Non-Operational State:         Operational
00:19:25.686    Entry Latency:                 16 microseconds
00:19:25.686    Exit Latency:                  4 microseconds
00:19:25.686    Relative Read Throughput:      0
00:19:25.686    Relative Read Latency:         0
00:19:25.686    Relative Write Throughput:     0
00:19:25.686    Relative Write Latency:        0
00:19:25.945    Idle Power:                     Not Reported
00:19:25.945    Active Power:                   Not Reported
00:19:25.945  Non-Operational Permissive Mode: Not Supported
00:19:25.945  
00:19:25.945  Health Information
00:19:25.945  ==================
00:19:25.945  Critical Warnings:
00:19:25.945    Available Spare Space:     OK
00:19:25.945    Temperature:               OK
00:19:25.945    Device Reliability:        OK
00:19:25.945    Read Only:                 No
00:19:25.945    Volatile Memory Backup:    OK
00:19:25.945  Current Temperature:         323 Kelvin (50 Celsius)
00:19:25.945  Temperature Threshold:       343 Kelvin (70 Celsius)
00:19:25.945  Available Spare:             0%
00:19:25.945  Available Spare Threshold:   0%
00:19:25.945  Life Percentage Used:        0%
00:19:25.945  Data Units Read:             4265
00:19:25.945  Data Units Written:          3997
00:19:25.945  Host Read Commands:          205219
00:19:25.945  Host Write Commands:         219712
00:19:25.946  Controller Busy Time:        0 minutes
00:19:25.946  Power Cycles:                0
00:19:25.946  Power On Hours:              0 hours
00:19:25.946  Unsafe Shutdowns:            0
00:19:25.946  Unrecoverable Media Errors:  0
00:19:25.946  Lifetime Error Log Entries:  0
00:19:25.946  Warning Temperature Time:    0 minutes
00:19:25.946  Critical Temperature Time:   0 minutes
00:19:25.946  
00:19:25.946  Number of Queues
00:19:25.946  ================
00:19:25.946  Number of I/O Submission Queues:      64
00:19:25.946  Number of I/O Completion Queues:      64
00:19:25.946  
00:19:25.946  ZNS Specific Controller Data
00:19:25.946  ============================
00:19:25.946  Zone Append Size Limit:      0
00:19:25.946  
00:19:25.946  
00:19:25.946  Active Namespaces
00:19:25.946  =================
00:19:25.946  Namespace ID:1
00:19:25.946  Error Recovery Timeout:                Unlimited
00:19:25.946  Command Set Identifier:                NVM (00h)
00:19:25.946  Deallocate:                            Supported
00:19:25.946  Deallocated/Unwritten Error:           Supported
00:19:25.946  Deallocated Read Value:                All 0x00
00:19:25.946  Deallocate in Write Zeroes:            Not Supported
00:19:25.946  Deallocated Guard Field:               0xFFFF
00:19:25.946  Flush:                                 Supported
00:19:25.946  Reservation:                           Not Supported
00:19:25.946  Namespace Sharing Capabilities:        Private
00:19:25.946  Size (in LBAs):                        1310720 (5GiB)
00:19:25.946  Capacity (in LBAs):                    1310720 (5GiB)
00:19:25.946  Utilization (in LBAs):                 1310720 (5GiB)
00:19:25.946  Thin Provisioning:                     Not Supported
00:19:25.946  Per-NS Atomic Units:                   No
00:19:25.946  Maximum Single Source Range Length:    128
00:19:25.946  Maximum Copy Length:                   128
00:19:25.946  Maximum Source Range Count:            128
00:19:25.946  NGUID/EUI64 Never Reused:              No
00:19:25.946  Namespace Write Protected:             No
00:19:25.946  Number of LBA Formats:                 8
00:19:25.946  Current LBA Format:                    LBA Format #04
00:19:25.946  LBA Format #00: Data Size:   512  Metadata Size:     0
00:19:25.946  LBA Format #01: Data Size:   512  Metadata Size:     8
00:19:25.946  LBA Format #02: Data Size:   512  Metadata Size:    16
00:19:25.946  LBA Format #03: Data Size:   512  Metadata Size:    64
00:19:25.946  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:19:25.946  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:19:25.946  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:19:25.946  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:19:25.946  
00:19:25.946  NVM Specific Namespace Data
00:19:25.946  ===========================
00:19:25.946  Logical Block Storage Tag Mask:               0
00:19:25.946  Protection Information Capabilities:
00:19:25.946    16b Guard Protection Information Storage Tag Support:  No
00:19:25.946    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:19:25.946    Storage Tag Check Read Support:                        No
00:19:25.946  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:25.946   13:56:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:19:25.946   13:56:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:19:26.205  =====================================================
00:19:26.205  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:26.206  =====================================================
00:19:26.206  Controller Capabilities/Features
00:19:26.206  ================================
00:19:26.206  Vendor ID:                             1b36
00:19:26.206  Subsystem Vendor ID:                   1af4
00:19:26.206  Serial Number:                         12340
00:19:26.206  Model Number:                          QEMU NVMe Ctrl
00:19:26.206  Firmware Version:                      8.0.0
00:19:26.206  Recommended Arb Burst:                 6
00:19:26.206  IEEE OUI Identifier:                   00 54 52
00:19:26.206  Multi-path I/O
00:19:26.206    May have multiple subsystem ports:   No
00:19:26.206    May have multiple controllers:       No
00:19:26.206    Associated with SR-IOV VF:           No
00:19:26.206  Max Data Transfer Size:                524288
00:19:26.206  Max Number of Namespaces:              256
00:19:26.206  Max Number of I/O Queues:              64
00:19:26.206  NVMe Specification Version (VS):       1.4
00:19:26.206  NVMe Specification Version (Identify): 1.4
00:19:26.206  Maximum Queue Entries:                 2048
00:19:26.206  Contiguous Queues Required:            Yes
00:19:26.206  Arbitration Mechanisms Supported
00:19:26.206    Weighted Round Robin:                Not Supported
00:19:26.206    Vendor Specific:                     Not Supported
00:19:26.206  Reset Timeout:                         7500 ms
00:19:26.206  Doorbell Stride:                       4 bytes
00:19:26.206  NVM Subsystem Reset:                   Not Supported
00:19:26.206  Command Sets Supported
00:19:26.206    NVM Command Set:                     Supported
00:19:26.206  Boot Partition:                        Not Supported
00:19:26.206  Memory Page Size Minimum:              4096 bytes
00:19:26.206  Memory Page Size Maximum:              65536 bytes
00:19:26.206  Persistent Memory Region:              Not Supported
00:19:26.206  Optional Asynchronous Events Supported
00:19:26.206    Namespace Attribute Notices:         Supported
00:19:26.206    Firmware Activation Notices:         Not Supported
00:19:26.206    ANA Change Notices:                  Not Supported
00:19:26.206    PLE Aggregate Log Change Notices:    Not Supported
00:19:26.206    LBA Status Info Alert Notices:       Not Supported
00:19:26.206    EGE Aggregate Log Change Notices:    Not Supported
00:19:26.206    Normal NVM Subsystem Shutdown event: Not Supported
00:19:26.206    Zone Descriptor Change Notices:      Not Supported
00:19:26.206    Discovery Log Change Notices:        Not Supported
00:19:26.206  Controller Attributes
00:19:26.206    128-bit Host Identifier:             Not Supported
00:19:26.206    Non-Operational Permissive Mode:     Not Supported
00:19:26.206    NVM Sets:                            Not Supported
00:19:26.206    Read Recovery Levels:                Not Supported
00:19:26.206    Endurance Groups:                    Not Supported
00:19:26.206    Predictable Latency Mode:            Not Supported
00:19:26.206    Traffic Based Keep Alive:            Not Supported
00:19:26.206    Namespace Granularity:               Not Supported
00:19:26.206    SQ Associations:                     Not Supported
00:19:26.206    UUID List:                           Not Supported
00:19:26.206    Multi-Domain Subsystem:              Not Supported
00:19:26.206    Fixed Capacity Management:           Not Supported
00:19:26.206    Variable Capacity Management:        Not Supported
00:19:26.206    Delete Endurance Group:              Not Supported
00:19:26.206    Delete NVM Set:                      Not Supported
00:19:26.206    Extended LBA Formats Supported:      Supported
00:19:26.206    Flexible Data Placement Supported:   Not Supported
00:19:26.206  
00:19:26.206  Controller Memory Buffer Support
00:19:26.206  ================================
00:19:26.206  Supported:                             No
00:19:26.206  
00:19:26.206  Persistent Memory Region Support
00:19:26.206  ================================
00:19:26.206  Supported:                             No
00:19:26.206  
00:19:26.206  Admin Command Set Attributes
00:19:26.206  ============================
00:19:26.206  Security Send/Receive:                 Not Supported
00:19:26.206  Format NVM:                            Supported
00:19:26.206  Firmware Activate/Download:            Not Supported
00:19:26.206  Namespace Management:                  Supported
00:19:26.206  Device Self-Test:                      Not Supported
00:19:26.206  Directives:                            Supported
00:19:26.206  NVMe-MI:                               Not Supported
00:19:26.206  Virtualization Management:             Not Supported
00:19:26.206  Doorbell Buffer Config:                Supported
00:19:26.206  Get LBA Status Capability:             Not Supported
00:19:26.206  Command & Feature Lockdown Capability: Not Supported
00:19:26.206  Abort Command Limit:                   4
00:19:26.206  Async Event Request Limit:             4
00:19:26.206  Number of Firmware Slots:              N/A
00:19:26.206  Firmware Slot 1 Read-Only:             N/A
00:19:26.206  Firmware Activation Without Reset:     N/A
00:19:26.206  Multiple Update Detection Support:     N/A
00:19:26.206  Firmware Update Granularity:           No Information Provided
00:19:26.206  Per-Namespace SMART Log:               Yes
00:19:26.206  Asymmetric Namespace Access Log Page:  Not Supported
00:19:26.206  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:19:26.206  Command Effects Log Page:              Supported
00:19:26.206  Get Log Page Extended Data:            Supported
00:19:26.206  Telemetry Log Pages:                   Not Supported
00:19:26.206  Persistent Event Log Pages:            Not Supported
00:19:26.206  Supported Log Pages Log Page:          May Support
00:19:26.206  Commands Supported & Effects Log Page: Not Supported
00:19:26.206  Feature Identifiers & Effects Log Page: May Support
00:19:26.206  NVMe-MI Commands & Effects Log Page:   May Support
00:19:26.206  Data Area 4 for Telemetry Log:         Not Supported
00:19:26.206  Error Log Page Entries Supported:      1
00:19:26.206  Keep Alive:                            Not Supported
00:19:26.206  
00:19:26.206  NVM Command Set Attributes
00:19:26.206  ==========================
00:19:26.206  Submission Queue Entry Size
00:19:26.206    Max:                       64
00:19:26.206    Min:                       64
00:19:26.206  Completion Queue Entry Size
00:19:26.206    Max:                       16
00:19:26.206    Min:                       16
00:19:26.206  Number of Namespaces:        256
00:19:26.206  Compare Command:             Supported
00:19:26.206  Write Uncorrectable Command: Not Supported
00:19:26.206  Dataset Management Command:  Supported
00:19:26.206  Write Zeroes Command:        Supported
00:19:26.206  Set Features Save Field:     Supported
00:19:26.206  Reservations:                Not Supported
00:19:26.206  Timestamp:                   Supported
00:19:26.206  Copy:                        Supported
00:19:26.206  Volatile Write Cache:        Present
00:19:26.206  Atomic Write Unit (Normal):  1
00:19:26.206  Atomic Write Unit (PFail):   1
00:19:26.206  Atomic Compare & Write Unit: 1
00:19:26.206  Fused Compare & Write:       Not Supported
00:19:26.206  Scatter-Gather List
00:19:26.206    SGL Command Set:           Supported
00:19:26.206    SGL Keyed:                 Not Supported
00:19:26.206    SGL Bit Bucket Descriptor: Not Supported
00:19:26.206    SGL Metadata Pointer:      Not Supported
00:19:26.206    Oversized SGL:             Not Supported
00:19:26.206    SGL Metadata Address:      Not Supported
00:19:26.206    SGL Offset:                Not Supported
00:19:26.206    Transport SGL Data Block:  Not Supported
00:19:26.206  Replay Protected Memory Block:  Not Supported
00:19:26.206  
00:19:26.206  Firmware Slot Information
00:19:26.206  =========================
00:19:26.206  Active slot:                 1
00:19:26.206  Slot 1 Firmware Revision:    1.0
00:19:26.206  
00:19:26.206  
00:19:26.206  Commands Supported and Effects
00:19:26.206  ==============================
00:19:26.206  Admin Commands
00:19:26.206  --------------
00:19:26.206     Delete I/O Submission Queue (00h): Supported 
00:19:26.206     Create I/O Submission Queue (01h): Supported 
00:19:26.206                    Get Log Page (02h): Supported 
00:19:26.206     Delete I/O Completion Queue (04h): Supported 
00:19:26.206     Create I/O Completion Queue (05h): Supported 
00:19:26.206                        Identify (06h): Supported 
00:19:26.206                           Abort (08h): Supported 
00:19:26.206                    Set Features (09h): Supported 
00:19:26.206                    Get Features (0Ah): Supported 
00:19:26.206      Asynchronous Event Request (0Ch): Supported 
00:19:26.206            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:19:26.206                  Directive Send (19h): Supported 
00:19:26.206               Directive Receive (1Ah): Supported 
00:19:26.206       Virtualization Management (1Ch): Supported 
00:19:26.206          Doorbell Buffer Config (7Ch): Supported 
00:19:26.206                      Format NVM (80h): Supported LBA-Change 
00:19:26.206  I/O Commands
00:19:26.206  ------------
00:19:26.206                           Flush (00h): Supported LBA-Change 
00:19:26.206                           Write (01h): Supported LBA-Change 
00:19:26.206                            Read (02h): Supported 
00:19:26.206                         Compare (05h): Supported 
00:19:26.206                    Write Zeroes (08h): Supported LBA-Change 
00:19:26.206              Dataset Management (09h): Supported LBA-Change 
00:19:26.206                         Unknown (0Ch): Supported 
00:19:26.206                         Unknown (12h): Supported 
00:19:26.206                            Copy (19h): Supported LBA-Change 
00:19:26.206                         Unknown (1Dh): Supported LBA-Change 
00:19:26.206  
00:19:26.206  Error Log
00:19:26.206  =========
00:19:26.206  
00:19:26.206  Arbitration
00:19:26.206  ===========
00:19:26.206  Arbitration Burst:           no limit
00:19:26.206  
00:19:26.206  Power Management
00:19:26.206  ================
00:19:26.206  Number of Power States:          1
00:19:26.206  Current Power State:             Power State #0
00:19:26.206  Power State #0:
00:19:26.206    Max Power:                     25.00 W
00:19:26.206    Non-Operational State:         Operational
00:19:26.206    Entry Latency:                 16 microseconds
00:19:26.206    Exit Latency:                  4 microseconds
00:19:26.206    Relative Read Throughput:      0
00:19:26.206    Relative Read Latency:         0
00:19:26.206    Relative Write Throughput:     0
00:19:26.206    Relative Write Latency:        0
00:19:26.206    Idle Power:                     Not Reported
00:19:26.206    Active Power:                   Not Reported
00:19:26.206  Non-Operational Permissive Mode: Not Supported
00:19:26.206  
00:19:26.206  Health Information
00:19:26.206  ==================
00:19:26.206  Critical Warnings:
00:19:26.206    Available Spare Space:     OK
00:19:26.206    Temperature:               OK
00:19:26.207    Device Reliability:        OK
00:19:26.207    Read Only:                 No
00:19:26.207    Volatile Memory Backup:    OK
00:19:26.207  Current Temperature:         323 Kelvin (50 Celsius)
00:19:26.207  Temperature Threshold:       343 Kelvin (70 Celsius)
00:19:26.207  Available Spare:             0%
00:19:26.207  Available Spare Threshold:   0%
00:19:26.207  Life Percentage Used:        0%
00:19:26.207  Data Units Read:             4265
00:19:26.207  Data Units Written:          3997
00:19:26.207  Host Read Commands:          205219
00:19:26.207  Host Write Commands:         219712
00:19:26.207  Controller Busy Time:        0 minutes
00:19:26.207  Power Cycles:                0
00:19:26.207  Power On Hours:              0 hours
00:19:26.207  Unsafe Shutdowns:            0
00:19:26.207  Unrecoverable Media Errors:  0
00:19:26.207  Lifetime Error Log Entries:  0
00:19:26.207  Warning Temperature Time:    0 minutes
00:19:26.207  Critical Temperature Time:   0 minutes
00:19:26.207  
00:19:26.207  Number of Queues
00:19:26.207  ================
00:19:26.207  Number of I/O Submission Queues:      64
00:19:26.207  Number of I/O Completion Queues:      64
00:19:26.207  
00:19:26.207  ZNS Specific Controller Data
00:19:26.207  ============================
00:19:26.207  Zone Append Size Limit:      0
00:19:26.207  
00:19:26.207  
00:19:26.207  Active Namespaces
00:19:26.207  =================
00:19:26.207  Namespace ID:1
00:19:26.207  Error Recovery Timeout:                Unlimited
00:19:26.207  Command Set Identifier:                NVM (00h)
00:19:26.207  Deallocate:                            Supported
00:19:26.207  Deallocated/Unwritten Error:           Supported
00:19:26.207  Deallocated Read Value:                All 0x00
00:19:26.207  Deallocate in Write Zeroes:            Not Supported
00:19:26.207  Deallocated Guard Field:               0xFFFF
00:19:26.207  Flush:                                 Supported
00:19:26.207  Reservation:                           Not Supported
00:19:26.207  Namespace Sharing Capabilities:        Private
00:19:26.207  Size (in LBAs):                        1310720 (5GiB)
00:19:26.207  Capacity (in LBAs):                    1310720 (5GiB)
00:19:26.207  Utilization (in LBAs):                 1310720 (5GiB)
00:19:26.207  Thin Provisioning:                     Not Supported
00:19:26.207  Per-NS Atomic Units:                   No
00:19:26.207  Maximum Single Source Range Length:    128
00:19:26.207  Maximum Copy Length:                   128
00:19:26.207  Maximum Source Range Count:            128
00:19:26.207  NGUID/EUI64 Never Reused:              No
00:19:26.207  Namespace Write Protected:             No
00:19:26.207  Number of LBA Formats:                 8
00:19:26.207  Current LBA Format:                    LBA Format #04
00:19:26.207  LBA Format #00: Data Size:   512  Metadata Size:     0
00:19:26.207  LBA Format #01: Data Size:   512  Metadata Size:     8
00:19:26.207  LBA Format #02: Data Size:   512  Metadata Size:    16
00:19:26.207  LBA Format #03: Data Size:   512  Metadata Size:    64
00:19:26.207  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:19:26.207  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:19:26.207  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:19:26.207  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:19:26.207  
00:19:26.207  NVM Specific Namespace Data
00:19:26.207  ===========================
00:19:26.207  Logical Block Storage Tag Mask:               0
00:19:26.207  Protection Information Capabilities:
00:19:26.207    16b Guard Protection Information Storage Tag Support:  No
00:19:26.207    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:19:26.207    Storage Tag Check Read Support:                        No
00:19:26.207  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:19:26.207  
00:19:26.207  real	0m0.880s
00:19:26.207  user	0m0.325s
00:19:26.207  sys	0m0.475s
00:19:26.207   13:56:08 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:26.207   13:56:08 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:19:26.207  ************************************
00:19:26.207  END TEST nvme_identify
00:19:26.207  ************************************
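nvme_identify above assembles its device list by piping scripts/gen_nvme.sh through jq to pull each controller's traddr, then runs spdk_nvme_identify once per address. The discovery-plus-identify core, as seen in the trace:

  rootdir=/home/vagrant/spdk_repo/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  for bdf in "${bdfs[@]}"; do
      "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  done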
00:19:26.207   13:56:08 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:19:26.207   13:56:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:26.207   13:56:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:26.207   13:56:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:26.207  ************************************
00:19:26.207  START TEST nvme_perf
00:19:26.207  ************************************
00:19:26.207   13:56:08 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:19:26.207   13:56:08 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:19:27.585  Initializing NVMe Controllers
00:19:27.585  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:27.585  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:27.585  Initialization complete. Launching workers.
00:19:27.585  ========================================================
00:19:27.585                                                                             Latency(us)
00:19:27.585  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:27.585  PCIE (0000:00:10.0) NSID 1 from core  0:   91447.87    1071.65    1397.91     647.62    9229.02
00:19:27.585  ========================================================
00:19:27.585  Total                                  :   91447.87    1071.65    1397.91     647.62    9229.02
00:19:27.585  
00:19:27.585  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:19:27.585  =================================================================================
00:19:27.585    1.00000% :   768.488us
00:19:27.585   10.00000% :   936.229us
00:19:27.585   25.00000% :  1107.870us
00:19:27.585   50.00000% :  1388.739us
00:19:27.585   75.00000% :  1654.004us
00:19:27.585   90.00000% :  1833.448us
00:19:27.585   95.00000% :  1927.070us
00:19:27.585   98.00000% :  2090.910us
00:19:27.585   99.00000% :  2387.383us
00:19:27.585   99.50000% :  2699.459us
00:19:27.585   99.90000% :  5554.956us
00:19:27.585   99.99000% :  8862.964us
00:19:27.585   99.99900% :  9237.455us
00:19:27.585   99.99990% :  9237.455us
00:19:27.585   99.99999% :  9237.455us
00:19:27.585  
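The percentile summary above is read off the cumulative histogram that follows it: each row gives a latency bucket's upper bound in microseconds, the cumulative fraction of I/Os at or below it, and the bucket's raw count (so the 50.00000% entry at 1388.739us is the median). A hedged sketch of recovering a percentile from plain "upper_bound_us count" pairs; hist.txt is a hypothetical dump, not a file this run produced:

  # Two passes over the same file: total the counts, then walk buckets until
  # the running sum crosses the target fraction (p50 here).
  awk 'NR == FNR { total += $2; next }
       { seen += $2 }
       !hit && seen >= 0.50 * total { print $1 " us"; hit = 1 }' hist.txt hist.txt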
00:19:27.585  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:19:27.585  ==============================================================================
00:19:27.585         Range in us     Cumulative    IO count
00:19:27.585    647.558 -   651.459:    0.0044%  (        4)
00:19:27.585    651.459 -   655.360:    0.0055%  (        1)
00:19:27.585    655.360 -   659.261:    0.0098%  (        4)
00:19:27.585    659.261 -   663.162:    0.0120%  (        2)
00:19:27.585    663.162 -   667.063:    0.0164%  (        4)
00:19:27.585    667.063 -   670.964:    0.0219%  (        5)
00:19:27.585    670.964 -   674.865:    0.0317%  (        9)
00:19:27.585    674.865 -   678.766:    0.0394%  (        7)
00:19:27.585    678.766 -   682.667:    0.0470%  (        7)
00:19:27.585    682.667 -   686.568:    0.0590%  (       11)
00:19:27.585    686.568 -   690.469:    0.0634%  (        4)
00:19:27.585    690.469 -   694.370:    0.0809%  (       16)
00:19:27.585    694.370 -   698.270:    0.0951%  (       13)
00:19:27.585    698.270 -   702.171:    0.1170%  (       20)
00:19:27.585    702.171 -   706.072:    0.1323%  (       14)
00:19:27.585    706.072 -   709.973:    0.1454%  (       12)
00:19:27.585    709.973 -   713.874:    0.1640%  (       17)
00:19:27.585    713.874 -   717.775:    0.1903%  (       24)
00:19:27.585    717.775 -   721.676:    0.2274%  (       34)
00:19:27.585    721.676 -   725.577:    0.2646%  (       34)
00:19:27.585    725.577 -   729.478:    0.2996%  (       32)
00:19:27.585    729.478 -   733.379:    0.3324%  (       30)
00:19:27.585    733.379 -   737.280:    0.3740%  (       38)
00:19:27.585    737.280 -   741.181:    0.4450%  (       65)
00:19:27.585    741.181 -   745.082:    0.5019%  (       52)
00:19:27.585    745.082 -   748.983:    0.5806%  (       72)
00:19:27.585    748.983 -   752.884:    0.6254%  (       41)
00:19:27.585    752.884 -   756.785:    0.7129%  (       80)
00:19:27.585    756.785 -   760.686:    0.7949%  (       75)
00:19:27.585    760.686 -   764.587:    0.8955%  (       92)
00:19:27.585    764.587 -   768.488:    1.0071%  (      102)
00:19:27.585    768.488 -   772.389:    1.1033%  (       88)
00:19:27.585    772.389 -   776.290:    1.2203%  (      107)
00:19:27.585    776.290 -   780.190:    1.3263%  (       97)
00:19:27.585    780.190 -   784.091:    1.4554%  (      118)
00:19:27.585    784.091 -   787.992:    1.6073%  (      139)
00:19:27.585    787.992 -   791.893:    1.7604%  (      140)
00:19:27.585    791.893 -   795.794:    1.9113%  (      138)
00:19:27.586    795.794 -   799.695:    2.0874%  (      161)
00:19:27.586    799.695 -   803.596:    2.2470%  (      146)
00:19:27.586    803.596 -   807.497:    2.4351%  (      172)
00:19:27.586    807.497 -   811.398:    2.5969%  (      148)
00:19:27.586    811.398 -   815.299:    2.7500%  (      140)
00:19:27.586    815.299 -   819.200:    2.9337%  (      168)
00:19:27.586    819.200 -   823.101:    3.0966%  (      149)
00:19:27.586    823.101 -   827.002:    3.3055%  (      191)
00:19:27.586    827.002 -   830.903:    3.4837%  (      163)
00:19:27.586    830.903 -   834.804:    3.7067%  (      204)
00:19:27.586    834.804 -   838.705:    3.8915%  (      169)
00:19:27.586    838.705 -   842.606:    4.1026%  (      193)
00:19:27.586    842.606 -   846.507:    4.3081%  (      188)
00:19:27.586    846.507 -   850.408:    4.5170%  (      191)
00:19:27.586    850.408 -   854.309:    4.7313%  (      196)
00:19:27.586    854.309 -   858.210:    4.9303%  (      182)
00:19:27.586    858.210 -   862.110:    5.1304%  (      183)
00:19:27.586    862.110 -   866.011:    5.3578%  (      208)
00:19:27.586    866.011 -   869.912:    5.6038%  (      225)
00:19:27.586    869.912 -   873.813:    5.8586%  (      233)
00:19:27.586    873.813 -   877.714:    6.0729%  (      196)
00:19:27.586    877.714 -   881.615:    6.3026%  (      210)
00:19:27.586    881.615 -   885.516:    6.5355%  (      213)
00:19:27.586    885.516 -   889.417:    6.7760%  (      220)
00:19:27.586    889.417 -   893.318:    7.0450%  (      246)
00:19:27.586    893.318 -   897.219:    7.3085%  (      241)
00:19:27.586    897.219 -   901.120:    7.5873%  (      255)
00:19:27.586    901.120 -   905.021:    7.8771%  (      265)
00:19:27.586    905.021 -   908.922:    8.1231%  (      225)
00:19:27.586    908.922 -   912.823:    8.3866%  (      241)
00:19:27.586    912.823 -   916.724:    8.6622%  (      252)
00:19:27.586    916.724 -   920.625:    8.9771%  (      288)
00:19:27.586    920.625 -   924.526:    9.2669%  (      265)
00:19:27.586    924.526 -   928.427:    9.5358%  (      246)
00:19:27.586    928.427 -   932.328:    9.8453%  (      283)
00:19:27.586    932.328 -   936.229:   10.1689%  (      296)
00:19:27.586    936.229 -   940.130:   10.5046%  (      307)
00:19:27.586    940.130 -   944.030:   10.7867%  (      258)
00:19:27.586    944.030 -   947.931:   11.0841%  (      272)
00:19:27.586    947.931 -   951.832:   11.3826%  (      273)
00:19:27.586    951.832 -   955.733:   11.7096%  (      299)
00:19:27.586    955.733 -   959.634:   12.0376%  (      300)
00:19:27.586    959.634 -   963.535:   12.3667%  (      301)
00:19:27.586    963.535 -   967.436:   12.6707%  (      278)
00:19:27.586    967.436 -   971.337:   12.9987%  (      300)
00:19:27.586    971.337 -   975.238:   13.3224%  (      296)
00:19:27.586    975.238 -   979.139:   13.6406%  (      291)
00:19:27.586    979.139 -   983.040:   13.9927%  (      322)
00:19:27.586    983.040 -   986.941:   14.3098%  (      290)
00:19:27.586    986.941 -   990.842:   14.6597%  (      320)
00:19:27.586    990.842 -   994.743:   14.9811%  (      294)
00:19:27.586    994.743 -   998.644:   15.3081%  (      299)
00:19:27.586    998.644 -  1006.446:   16.0133%  (      645)
00:19:27.586   1006.446 -  1014.248:   16.6235%  (      558)
00:19:27.586   1014.248 -  1022.050:   17.3462%  (      661)
00:19:27.586   1022.050 -  1029.851:   17.9804%  (      580)
00:19:27.586   1029.851 -  1037.653:   18.6824%  (      642)
00:19:27.586   1037.653 -  1045.455:   19.3680%  (      627)
00:19:27.586   1045.455 -  1053.257:   20.0656%  (      638)
00:19:27.586   1053.257 -  1061.059:   20.7665%  (      641)
00:19:27.586   1061.059 -  1068.861:   21.4794%  (      652)
00:19:27.586   1068.861 -  1076.663:   22.1836%  (      644)
00:19:27.586   1076.663 -  1084.465:   22.8746%  (      632)
00:19:27.586   1084.465 -  1092.267:   23.5919%  (      656)
00:19:27.586   1092.267 -  1100.069:   24.2808%  (      630)
00:19:27.586   1100.069 -  1107.870:   25.0123%  (      669)
00:19:27.586   1107.870 -  1115.672:   25.7427%  (      668)
00:19:27.586   1115.672 -  1123.474:   26.4349%  (      633)
00:19:27.586   1123.474 -  1131.276:   27.1686%  (      671)
00:19:27.586   1131.276 -  1139.078:   27.8673%  (      639)
00:19:27.586   1139.078 -  1146.880:   28.6305%  (      698)
00:19:27.586   1146.880 -  1154.682:   29.3281%  (      638)
00:19:27.586   1154.682 -  1162.484:   30.0421%  (      653)
00:19:27.586   1162.484 -  1170.286:   30.7714%  (      667)
00:19:27.586   1170.286 -  1178.088:   31.4647%  (      634)
00:19:27.586   1178.088 -  1185.890:   32.1907%  (      664)
00:19:27.586   1185.890 -  1193.691:   32.8785%  (      629)
00:19:27.586   1193.691 -  1201.493:   33.5969%  (      657)
00:19:27.586   1201.493 -  1209.295:   34.3152%  (      657)
00:19:27.586   1209.295 -  1217.097:   35.0588%  (      680)
00:19:27.586   1217.097 -  1224.899:   35.7651%  (      646)
00:19:27.586   1224.899 -  1232.701:   36.4977%  (      670)
00:19:27.586   1232.701 -  1240.503:   37.1757%  (      620)
00:19:27.586   1240.503 -  1248.305:   37.9192%  (      680)
00:19:27.586   1248.305 -  1256.107:   38.5971%  (      620)
00:19:27.586   1256.107 -  1263.909:   39.2958%  (      639)
00:19:27.586   1263.909 -  1271.710:   40.0142%  (      657)
00:19:27.586   1271.710 -  1279.512:   40.7096%  (      636)
00:19:27.586   1279.512 -  1287.314:   41.4335%  (      662)
00:19:27.586   1287.314 -  1295.116:   42.1245%  (      632)
00:19:27.586   1295.116 -  1302.918:   42.8528%  (      666)
00:19:27.586   1302.918 -  1310.720:   43.5373%  (      626)
00:19:27.586   1310.720 -  1318.522:   44.2644%  (      665)
00:19:27.586   1318.522 -  1326.324:   44.9642%  (      640)
00:19:27.586   1326.324 -  1334.126:   45.6837%  (      658)
00:19:27.586   1334.126 -  1341.928:   46.4031%  (      658)
00:19:27.586   1341.928 -  1349.730:   47.1117%  (      648)
00:19:27.586   1349.730 -  1357.531:   47.8159%  (      644)
00:19:27.586   1357.531 -  1365.333:   48.5485%  (      670)
00:19:27.586   1365.333 -  1373.135:   49.2701%  (      660)
00:19:27.586   1373.135 -  1380.937:   49.9754%  (      645)
00:19:27.586   1380.937 -  1388.739:   50.6883%  (      652)
00:19:27.586   1388.739 -  1396.541:   51.3969%  (      648)
00:19:27.586   1396.541 -  1404.343:   52.1021%  (      645)
00:19:27.586   1404.343 -  1412.145:   52.8304%  (      666)
00:19:27.586   1412.145 -  1419.947:   53.5444%  (      653)
00:19:27.586   1419.947 -  1427.749:   54.2649%  (      659)
00:19:27.586   1427.749 -  1435.550:   55.0085%  (      680)
00:19:27.586   1435.550 -  1443.352:   55.7225%  (      653)
00:19:27.586   1443.352 -  1451.154:   56.4496%  (      665)
00:19:27.586   1451.154 -  1458.956:   57.1822%  (      670)
00:19:27.586   1458.956 -  1466.758:   57.8875%  (      645)
00:19:27.586   1466.758 -  1474.560:   58.6277%  (      677)
00:19:27.586   1474.560 -  1482.362:   59.3538%  (      664)
00:19:27.586   1482.362 -  1490.164:   60.0820%  (      666)
00:19:27.586   1490.164 -  1497.966:   60.8070%  (      663)
00:19:27.586   1497.966 -  1505.768:   61.5308%  (      662)
00:19:27.586   1505.768 -  1513.570:   62.2317%  (      641)
00:19:27.586   1513.570 -  1521.371:   62.9741%  (      679)
00:19:27.586   1521.371 -  1529.173:   63.6947%  (      659)
00:19:27.586   1529.173 -  1536.975:   64.4273%  (      670)
00:19:27.586   1536.975 -  1544.777:   65.1468%  (      658)
00:19:27.586   1544.777 -  1552.579:   65.8608%  (      653)
00:19:27.586   1552.579 -  1560.381:   66.5825%  (      660)
00:19:27.586   1560.381 -  1568.183:   67.3085%  (      664)
00:19:27.586   1568.183 -  1575.985:   68.0214%  (      652)
00:19:27.586   1575.985 -  1583.787:   68.7540%  (      670)
00:19:27.586   1583.787 -  1591.589:   69.4626%  (      648)
00:19:27.586   1591.589 -  1599.390:   70.1744%  (      651)
00:19:27.586   1599.390 -  1607.192:   70.9267%  (      688)
00:19:27.586   1607.192 -  1614.994:   71.6090%  (      624)
00:19:27.586   1614.994 -  1622.796:   72.3307%  (      660)
00:19:27.586   1622.796 -  1630.598:   73.0414%  (      650)
00:19:27.586   1630.598 -  1638.400:   73.7477%  (      646)
00:19:27.586   1638.400 -  1646.202:   74.4727%  (      663)
00:19:27.586   1646.202 -  1654.004:   75.1670%  (      635)
00:19:27.586   1654.004 -  1661.806:   75.8963%  (      667)
00:19:27.586   1661.806 -  1669.608:   76.5994%  (      643)
00:19:27.586   1669.608 -  1677.410:   77.2905%  (      632)
00:19:27.586   1677.410 -  1685.211:   78.0198%  (      667)
00:19:27.586   1685.211 -  1693.013:   78.7043%  (      626)
00:19:27.586   1693.013 -  1700.815:   79.3997%  (      636)
00:19:27.586   1700.815 -  1708.617:   80.1072%  (      647)
00:19:27.586   1708.617 -  1716.419:   80.8244%  (      656)
00:19:27.586   1716.419 -  1724.221:   81.5002%  (      618)
00:19:27.586   1724.221 -  1732.023:   82.2208%  (      659)
00:19:27.586   1732.023 -  1739.825:   82.8987%  (      620)
00:19:27.586   1739.825 -  1747.627:   83.5449%  (      591)
00:19:27.586   1747.627 -  1755.429:   84.2447%  (      640)
00:19:27.586   1755.429 -  1763.230:   84.8680%  (      570)
00:19:27.586   1763.230 -  1771.032:   85.5164%  (      593)
00:19:27.586   1771.032 -  1778.834:   86.1287%  (      560)
00:19:27.586   1778.834 -  1786.636:   86.7487%  (      567)
00:19:27.586   1786.636 -  1794.438:   87.3533%  (      553)
00:19:27.586   1794.438 -  1802.240:   87.9230%  (      521)
00:19:27.586   1802.240 -  1810.042:   88.4872%  (      516)
00:19:27.586   1810.042 -  1817.844:   89.0547%  (      519)
00:19:27.586   1817.844 -  1825.646:   89.5949%  (      494)
00:19:27.586   1825.646 -  1833.448:   90.1175%  (      478)
00:19:27.586   1833.448 -  1841.250:   90.6468%  (      484)
00:19:27.586   1841.250 -  1849.051:   91.1519%  (      462)
00:19:27.586   1849.051 -  1856.853:   91.6462%  (      452)
00:19:27.586   1856.853 -  1864.655:   92.1338%  (      446)
00:19:27.586   1864.655 -  1872.457:   92.6029%  (      429)
00:19:27.586   1872.457 -  1880.259:   93.0141%  (      376)
00:19:27.586   1880.259 -  1888.061:   93.4536%  (      402)
00:19:27.586   1888.061 -  1895.863:   93.8538%  (      366)
00:19:27.586   1895.863 -  1903.665:   94.2015%  (      318)
00:19:27.586   1903.665 -  1911.467:   94.5624%  (      330)
00:19:27.586   1911.467 -  1919.269:   94.8674%  (      279)
00:19:27.586   1919.269 -  1927.070:   95.1736%  (      280)
00:19:27.586   1927.070 -  1934.872:   95.4294%  (      234)
00:19:27.586   1934.872 -  1942.674:   95.6798%  (      229)
00:19:27.586   1942.674 -  1950.476:   95.9095%  (      210)
00:19:27.586   1950.476 -  1958.278:   96.1183%  (      191)
00:19:27.586   1958.278 -  1966.080:   96.3239%  (      188)
00:19:27.586   1966.080 -  1973.882:   96.5021%  (      163)
00:19:27.586   1973.882 -  1981.684:   96.6694%  (      153)
00:19:27.586   1981.684 -  1989.486:   96.8269%  (      144)
00:19:27.586   1989.486 -  1997.288:   96.9734%  (      134)
00:19:27.586   1997.288 -  2012.891:   97.2238%  (      229)
00:19:27.586   2012.891 -  2028.495:   97.4446%  (      202)
00:19:27.586   2028.495 -  2044.099:   97.6316%  (      171)
00:19:27.586   2044.099 -  2059.703:   97.8022%  (      156)
00:19:27.586   2059.703 -  2075.307:   97.9301%  (      117)
00:19:27.586   2075.307 -  2090.910:   98.0406%  (      101)
00:19:27.586   2090.910 -  2106.514:   98.1346%  (       86)
00:19:27.586   2106.514 -  2122.118:   98.2111%  (       70)
00:19:27.586   2122.118 -  2137.722:   98.2888%  (       71)
00:19:27.586   2137.722 -  2153.326:   98.3566%  (       62)
00:19:27.586   2153.326 -  2168.930:   98.4233%  (       61)
00:19:27.586   2168.930 -  2184.533:   98.4823%  (       54)
00:19:27.586   2184.533 -  2200.137:   98.5435%  (       56)
00:19:27.587   2200.137 -  2215.741:   98.6015%  (       53)
00:19:27.587   2215.741 -  2231.345:   98.6562%  (       50)
00:19:27.587   2231.345 -  2246.949:   98.7054%  (       45)
00:19:27.587   2246.949 -  2262.552:   98.7502%  (       41)
00:19:27.587   2262.552 -  2278.156:   98.7961%  (       42)
00:19:27.587   2278.156 -  2293.760:   98.8344%  (       35)
00:19:27.587   2293.760 -  2309.364:   98.8716%  (       34)
00:19:27.587   2309.364 -  2324.968:   98.9022%  (       28)
00:19:27.587   2324.968 -  2340.571:   98.9295%  (       25)
00:19:27.587   2340.571 -  2356.175:   98.9601%  (       28)
00:19:27.587   2356.175 -  2371.779:   98.9919%  (       29)
00:19:27.587   2371.779 -  2387.383:   99.0181%  (       24)
00:19:27.587   2387.383 -  2402.987:   99.0465%  (       26)
00:19:27.587   2402.987 -  2418.590:   99.0760%  (       27)
00:19:27.587   2418.590 -  2434.194:   99.1012%  (       23)
00:19:27.587   2434.194 -  2449.798:   99.1296%  (       26)
00:19:27.587   2449.798 -  2465.402:   99.1559%  (       24)
00:19:27.587   2465.402 -  2481.006:   99.1854%  (       27)
00:19:27.587   2481.006 -  2496.610:   99.2094%  (       22)
00:19:27.587   2496.610 -  2512.213:   99.2390%  (       27)
00:19:27.587   2512.213 -  2527.817:   99.2630%  (       22)
00:19:27.587   2527.817 -  2543.421:   99.2893%  (       24)
00:19:27.587   2543.421 -  2559.025:   99.3188%  (       27)
00:19:27.587   2559.025 -  2574.629:   99.3396%  (       19)
00:19:27.587   2574.629 -  2590.232:   99.3636%  (       22)
00:19:27.587   2590.232 -  2605.836:   99.3833%  (       18)
00:19:27.587   2605.836 -  2621.440:   99.4074%  (       22)
00:19:27.587   2621.440 -  2637.044:   99.4303%  (       21)
00:19:27.587   2637.044 -  2652.648:   99.4555%  (       23)
00:19:27.587   2652.648 -  2668.251:   99.4773%  (       20)
00:19:27.587   2668.251 -  2683.855:   99.4894%  (       11)
00:19:27.587   2683.855 -  2699.459:   99.5036%  (       13)
00:19:27.587   2699.459 -  2715.063:   99.5156%  (       11)
00:19:27.587   2715.063 -  2730.667:   99.5254%  (        9)
00:19:27.587   2730.667 -  2746.270:   99.5397%  (       13)
00:19:27.587   2746.270 -  2761.874:   99.5517%  (       11)
00:19:27.587   2761.874 -  2777.478:   99.5637%  (       11)
00:19:27.587   2777.478 -  2793.082:   99.5736%  (        9)
00:19:27.587   2793.082 -  2808.686:   99.5823%  (        8)
00:19:27.587   2808.686 -  2824.290:   99.5889%  (        6)
00:19:27.587   2824.290 -  2839.893:   99.5987%  (        9)
00:19:27.587   2839.893 -  2855.497:   99.6031%  (        4)
00:19:27.587   2855.497 -  2871.101:   99.6096%  (        6)
00:19:27.587   2871.101 -  2886.705:   99.6140%  (        4)
00:19:27.587   2886.705 -  2902.309:   99.6195%  (        5)
00:19:27.587   2902.309 -  2917.912:   99.6239%  (        4)
00:19:27.587   2917.912 -  2933.516:   99.6282%  (        4)
00:19:27.587   2933.516 -  2949.120:   99.6326%  (        4)
00:19:27.587   2949.120 -  2964.724:   99.6370%  (        4)
00:19:27.587   2964.724 -  2980.328:   99.6403%  (        3)
00:19:27.587   2980.328 -  2995.931:   99.6457%  (        5)
00:19:27.587   2995.931 -  3011.535:   99.6501%  (        4)
00:19:27.587   3011.535 -  3027.139:   99.6545%  (        4)
00:19:27.587   3027.139 -  3042.743:   99.6610%  (        6)
00:19:27.587   3042.743 -  3058.347:   99.6643%  (        3)
00:19:27.587   3058.347 -  3073.950:   99.6698%  (        5)
00:19:27.587   3073.950 -  3089.554:   99.6742%  (        4)
00:19:27.587   3089.554 -  3105.158:   99.6796%  (        5)
00:19:27.587   3105.158 -  3120.762:   99.6829%  (        3)
00:19:27.587   3120.762 -  3136.366:   99.6895%  (        6)
00:19:27.587   3136.366 -  3151.970:   99.6927%  (        3)
00:19:27.587   3151.970 -  3167.573:   99.6960%  (        3)
00:19:27.587   3167.573 -  3183.177:   99.7026%  (        6)
00:19:27.587   3183.177 -  3198.781:   99.7048%  (        2)
00:19:27.587   3198.781 -  3214.385:   99.7113%  (        6)
00:19:27.587   3214.385 -  3229.989:   99.7146%  (        3)
00:19:27.587   3229.989 -  3245.592:   99.7212%  (        6)
00:19:27.587   3245.592 -  3261.196:   99.7245%  (        3)
00:19:27.587   3261.196 -  3276.800:   99.7310%  (        6)
00:19:27.587   3276.800 -  3292.404:   99.7343%  (        3)
00:19:27.587   3292.404 -  3308.008:   99.7398%  (        5)
00:19:27.587   3308.008 -  3323.611:   99.7430%  (        3)
00:19:27.587   3323.611 -  3339.215:   99.7485%  (        5)
00:19:27.587   3339.215 -  3354.819:   99.7507%  (        2)
00:19:27.587   3354.819 -  3370.423:   99.7551%  (        4)
00:19:27.587   3370.423 -  3386.027:   99.7573%  (        2)
00:19:27.587   3386.027 -  3401.630:   99.7616%  (        4)
00:19:27.587   3401.630 -  3417.234:   99.7638%  (        2)
00:19:27.587   3417.234 -  3432.838:   99.7671%  (        3)
00:19:27.587   3432.838 -  3448.442:   99.7693%  (        2)
00:19:27.587   3448.442 -  3464.046:   99.7715%  (        2)
00:19:27.587   3464.046 -  3479.650:   99.7748%  (        3)
00:19:27.587   3479.650 -  3495.253:   99.7758%  (        1)
00:19:27.587   3495.253 -  3510.857:   99.7780%  (        2)
00:19:27.587   3510.857 -  3526.461:   99.7813%  (        3)
00:19:27.587   3526.461 -  3542.065:   99.7835%  (        2)
00:19:27.587   3542.065 -  3557.669:   99.7857%  (        2)
00:19:27.587   3557.669 -  3573.272:   99.7879%  (        2)
00:19:27.587   3573.272 -  3588.876:   99.7901%  (        2)
00:19:27.587   3588.876 -  3604.480:   99.7933%  (        3)
00:19:27.587   3604.480 -  3620.084:   99.7955%  (        2)
00:19:27.587   3620.084 -  3635.688:   99.7988%  (        3)
00:19:27.587   3635.688 -  3651.291:   99.8010%  (        2)
00:19:27.587   3651.291 -  3666.895:   99.8021%  (        1)
00:19:27.587   3666.895 -  3682.499:   99.8054%  (        3)
00:19:27.587   3682.499 -  3698.103:   99.8076%  (        2)
00:19:27.587   3698.103 -  3713.707:   99.8108%  (        3)
00:19:27.587   3713.707 -  3729.310:   99.8141%  (        3)
00:19:27.587   3729.310 -  3744.914:   99.8152%  (        1)
00:19:27.587   3744.914 -  3760.518:   99.8185%  (        3)
00:19:27.587   3760.518 -  3776.122:   99.8207%  (        2)
00:19:27.587   3776.122 -  3791.726:   99.8229%  (        2)
00:19:27.587   3791.726 -  3807.330:   99.8261%  (        3)
00:19:27.587   3807.330 -  3822.933:   99.8294%  (        3)
00:19:27.587   3822.933 -  3838.537:   99.8316%  (        2)
00:19:27.587   3838.537 -  3854.141:   99.8338%  (        2)
00:19:27.587   3854.141 -  3869.745:   99.8360%  (        2)
00:19:27.587   3885.349 -  3900.952:   99.8382%  (        2)
00:19:27.587   3900.952 -  3916.556:   99.8393%  (        1)
00:19:27.587   3916.556 -  3932.160:   99.8415%  (        2)
00:19:27.587   3932.160 -  3947.764:   99.8436%  (        2)
00:19:27.587   3963.368 -  3978.971:   99.8458%  (        2)
00:19:27.587   3978.971 -  3994.575:   99.8469%  (        1)
00:19:27.587   3994.575 -  4025.783:   99.8480%  (        1)
00:19:27.587   4025.783 -  4056.990:   99.8491%  (        1)
00:19:27.587   4056.990 -  4088.198:   99.8502%  (        1)
00:19:27.587   4088.198 -  4119.406:   99.8513%  (        1)
00:19:27.587   4119.406 -  4150.613:   99.8535%  (        2)
00:19:27.587   4150.613 -  4181.821:   99.8546%  (        1)
00:19:27.587   4181.821 -  4213.029:   99.8557%  (        1)
00:19:27.587   4213.029 -  4244.236:   99.8589%  (        3)
00:19:27.587   4244.236 -  4275.444:   99.8600%  (        1)
00:19:27.587   4275.444 -  4306.651:   99.8622%  (        2)
00:19:27.587   4306.651 -  4337.859:   99.8633%  (        1)
00:19:27.587   4337.859 -  4369.067:   99.8644%  (        1)
00:19:27.587   4369.067 -  4400.274:   99.8655%  (        1)
00:19:27.587   4400.274 -  4431.482:   99.8666%  (        1)
00:19:27.587   4431.482 -  4462.690:   99.8677%  (        1)
00:19:27.587   4462.690 -  4493.897:   99.8688%  (        1)
00:19:27.587   4525.105 -  4556.312:   99.8699%  (        1)
00:19:27.587   4556.312 -  4587.520:   99.8710%  (        1)
00:19:27.587   4587.520 -  4618.728:   99.8721%  (        1)
00:19:27.587   4618.728 -  4649.935:   99.8732%  (        1)
00:19:27.587   4649.935 -  4681.143:   99.8743%  (        1)
00:19:27.587   4681.143 -  4712.350:   99.8753%  (        1)
00:19:27.587   4743.558 -  4774.766:   99.8764%  (        1)
00:19:27.587   4774.766 -  4805.973:   99.8775%  (        1)
00:19:27.587   4805.973 -  4837.181:   99.8786%  (        1)
00:19:27.587   4837.181 -  4868.389:   99.8797%  (        1)
00:19:27.587   4868.389 -  4899.596:   99.8808%  (        1)
00:19:27.587   4930.804 -  4962.011:   99.8819%  (        1)
00:19:27.587   4962.011 -  4993.219:   99.8841%  (        2)
00:19:27.587   5024.427 -  5055.634:   99.8852%  (        1)
00:19:27.587   5055.634 -  5086.842:   99.8863%  (        1)
00:19:27.587   5086.842 -  5118.050:   99.8874%  (        1)
00:19:27.587   5118.050 -  5149.257:   99.8885%  (        1)
00:19:27.587   5149.257 -  5180.465:   99.8896%  (        1)
00:19:27.587   5211.672 -  5242.880:   99.8918%  (        2)
00:19:27.587   5242.880 -  5274.088:   99.8928%  (        1)
00:19:27.587   5305.295 -  5336.503:   99.8939%  (        1)
00:19:27.587   5336.503 -  5367.710:   99.8950%  (        1)
00:19:27.587   5367.710 -  5398.918:   99.8961%  (        1)
00:19:27.587   5398.918 -  5430.126:   99.8972%  (        1)
00:19:27.587   5461.333 -  5492.541:   99.8983%  (        1)
00:19:27.587   5492.541 -  5523.749:   99.8994%  (        1)
00:19:27.587   5523.749 -  5554.956:   99.9005%  (        1)
00:19:27.587   5554.956 -  5586.164:   99.9016%  (        1)
00:19:27.587   5586.164 -  5617.371:   99.9027%  (        1)
00:19:27.587   5617.371 -  5648.579:   99.9038%  (        1)
00:19:27.587   5648.579 -  5679.787:   99.9049%  (        1)
00:19:27.587   5679.787 -  5710.994:   99.9060%  (        1)
00:19:27.587   5742.202 -  5773.410:   99.9071%  (        1)
00:19:27.587   5773.410 -  5804.617:   99.9082%  (        1)
00:19:27.587   5835.825 -  5867.032:   99.9092%  (        1)
00:19:27.587   5867.032 -  5898.240:   99.9103%  (        1)
00:19:27.587   5898.240 -  5929.448:   99.9114%  (        1)
00:19:27.587   5960.655 -  5991.863:   99.9125%  (        1)
00:19:27.587   5991.863 -  6023.070:   99.9136%  (        1)
00:19:27.587   6023.070 -  6054.278:   99.9147%  (        1)
00:19:27.587   6085.486 -  6116.693:   99.9158%  (        1)
00:19:27.587   6116.693 -  6147.901:   99.9169%  (        1)
00:19:27.587   6147.901 -  6179.109:   99.9180%  (        1)
00:19:27.587   6210.316 -  6241.524:   99.9191%  (        1)
00:19:27.587   6241.524 -  6272.731:   99.9202%  (        1)
00:19:27.587   6303.939 -  6335.147:   99.9213%  (        1)
00:19:27.587   6335.147 -  6366.354:   99.9224%  (        1)
00:19:27.587   6366.354 -  6397.562:   99.9235%  (        1)
00:19:27.587   6397.562 -  6428.770:   99.9246%  (        1)
00:19:27.587   6428.770 -  6459.977:   99.9256%  (        1)
00:19:27.587   6459.977 -  6491.185:   99.9267%  (        1)
00:19:27.587   6522.392 -  6553.600:   99.9278%  (        1)
00:19:27.587   6553.600 -  6584.808:   99.9289%  (        1)
00:19:27.587   6616.015 -  6647.223:   99.9300%  (        1)
00:19:27.587   6709.638 -  6740.846:   99.9311%  (        1)
00:19:27.587   6740.846 -  6772.053:   99.9322%  (        1)
00:19:27.587   6772.053 -  6803.261:   99.9333%  (        1)
00:19:27.587   6803.261 -  6834.469:   99.9344%  (        1)
00:19:27.587   6865.676 -  6896.884:   99.9355%  (        1)
00:19:27.587   6896.884 -  6928.091:   99.9366%  (        1)
00:19:27.587   6928.091 -  6959.299:   99.9377%  (        1)
00:19:27.587   6959.299 -  6990.507:   99.9388%  (        1)
00:19:27.587   6990.507 -  7021.714:   99.9399%  (        1)
00:19:27.587   7021.714 -  7052.922:   99.9410%  (        1)
00:19:27.587   7052.922 -  7084.130:   99.9420%  (        1)
00:19:27.587   7115.337 -  7146.545:   99.9431%  (        1)
00:19:27.587   7177.752 -  7208.960:   99.9442%  (        1)
00:19:27.587   7208.960 -  7240.168:   99.9453%  (        1)
00:19:27.588   7271.375 -  7302.583:   99.9464%  (        1)
00:19:27.588   7302.583 -  7333.790:   99.9475%  (        1)
00:19:27.588   7333.790 -  7364.998:   99.9486%  (        1)
00:19:27.588   7364.998 -  7396.206:   99.9497%  (        1)
00:19:27.588   7427.413 -  7458.621:   99.9508%  (        1)
00:19:27.588   7489.829 -  7521.036:   99.9519%  (        1)
00:19:27.588   7521.036 -  7552.244:   99.9530%  (        1)
00:19:27.588   7552.244 -  7583.451:   99.9541%  (        1)
00:19:27.588   7614.659 -  7645.867:   99.9552%  (        1)
00:19:27.588   7645.867 -  7677.074:   99.9563%  (        1)
00:19:27.588   7677.074 -  7708.282:   99.9574%  (        1)
00:19:27.588   7739.490 -  7770.697:   99.9584%  (        1)
00:19:27.588   7770.697 -  7801.905:   99.9595%  (        1)
00:19:27.588   7833.112 -  7864.320:   99.9617%  (        2)
00:19:27.588   7864.320 -  7895.528:   99.9628%  (        1)
00:19:27.588   7926.735 -  7957.943:   99.9639%  (        1)
00:19:27.588   7957.943 -  7989.150:   99.9650%  (        1)
00:19:27.588   7989.150 -  8051.566:   99.9672%  (        2)
00:19:27.588   8051.566 -  8113.981:   99.9683%  (        1)
00:19:27.588   8113.981 -  8176.396:   99.9705%  (        2)
00:19:27.588   8176.396 -  8238.811:   99.9727%  (        2)
00:19:27.588   8238.811 -  8301.227:   99.9738%  (        1)
00:19:27.588   8301.227 -  8363.642:   99.9759%  (        2)
00:19:27.588   8363.642 -  8426.057:   99.9781%  (        2)
00:19:27.588   8426.057 -  8488.472:   99.9792%  (        1)
00:19:27.588   8488.472 -  8550.888:   99.9814%  (        2)
00:19:27.588   8550.888 -  8613.303:   99.9836%  (        2)
00:19:27.588   8613.303 -  8675.718:   99.9858%  (        2)
00:19:27.588   8675.718 -  8738.133:   99.9869%  (        1)
00:19:27.588   8738.133 -  8800.549:   99.9891%  (        2)
00:19:27.588   8800.549 -  8862.964:   99.9902%  (        1)
00:19:27.588   8862.964 -  8925.379:   99.9923%  (        2)
00:19:27.588   8925.379 -  8987.794:   99.9934%  (        1)
00:19:27.588   8987.794 -  9050.210:   99.9945%  (        1)
00:19:27.588   9050.210 -  9112.625:   99.9967%  (        2)
00:19:27.588   9112.625 -  9175.040:   99.9989%  (        2)
00:19:27.588   9175.040 -  9237.455:  100.0000%  (        1)
00:19:27.588  
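Editor's note: each row in the histogram above has the shape "lower_us - upper_us: cumulative% (IO count in bucket)"; buckets with no samples are omitted entirely, which is why some ranges (for example 3869.745 - 3885.349) never appear. A minimal awk sketch, assuming the rows above were saved verbatim (timestamps included) to a hypothetical file histogram.txt, that reports the first bucket whose cumulative column reaches a target percentile plus the total I/O count:

  # match only histogram rows: they contain both a '%' and a '(...)' count
  awk -v target=99 '/%/ && /\(/ {
      gsub(/[():%]/, "")                 # "893.318:" -> "893.318", "7.0450%" -> "7.0450"
      total += $NF                       # last field is the per-bucket IO count
      if (!hit && $5 + 0 >= target) {    # $5 is the cumulative percentage
          hit = 1
          print "p" target " upper bound:", $4, "us"
      }
  }
  END { print "total IOs:", total }' histogram.txt

For the rows above this prints 2387.383 us: the 2371.779 - 2387.383 bucket is where the cumulative column first crosses 99% (99.0181%).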
00:19:27.847   13:56:10 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:19:29.223  Initializing NVMe Controllers
00:19:29.223  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:29.223  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:29.223  Initialization complete. Launching workers.
00:19:29.223  ========================================================
00:19:29.223                                                                             Latency(us)
00:19:29.223  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:29.223  PCIE (0000:00:10.0) NSID 1 from core  0:   75349.03     883.00    1698.64     520.64    9004.42
00:19:29.223  ========================================================
00:19:29.223  Total                                  :   75349.03     883.00    1698.64     520.64    9004.42
00:19:29.223  
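A quick consistency check on the table above (not part of the original run): given the command's -o 12288 and -q 128 flags, throughput in MiB/s is IOPS times the 12288-byte I/O size divided by 2^20, and by Little's law IOPS is roughly queue depth divided by average latency:

  echo '75349.03 * 12288 / 1048576' | bc -l   # -> 882.996..., table shows 883.00 MiB/s
  echo '128 / (1698.64 * 0.000001)' | bc -l   # -> 75354.4..., table shows 75349.03 IOPS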
00:19:29.223  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:19:29.223  =================================================================================
00:19:29.223    1.00000% :  1092.267us
00:19:29.223   10.00000% :  1341.928us
00:19:29.223   25.00000% :  1458.956us
00:19:29.223   50.00000% :  1614.994us
00:19:29.223   75.00000% :  1856.853us
00:19:29.223   90.00000% :  2153.326us
00:19:29.223   95.00000% :  2371.779us
00:19:29.223   98.00000% :  2746.270us
00:19:29.223   99.00000% :  3089.554us
00:19:29.223   99.50000% :  3557.669us
00:19:29.223   99.90000% :  5430.126us
00:19:29.223   99.99000% :  6803.261us
00:19:29.223   99.99900% :  9050.210us
00:19:29.223   99.99990% :  9050.210us
00:19:29.223   99.99999% :  9050.210us
00:19:29.223  
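Each percentile in the summary above is the upper bound of the first bucket in the histogram below whose cumulative column reaches that level; the 50.00000% entry, 1614.994us, corresponds to the 1607.192 - 1614.994 row at 50.0166%. With the same hypothetical-file assumption as the earlier sketch, restricted to this run's rows:

  awk '/%/ && /\(/ { gsub(/[():%]/, ""); if ($5 + 0 >= 50) { print $4 " us"; exit } }' histogram.txt   # -> 1614.994 us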
00:19:29.223  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:19:29.223  ==============================================================================
00:19:29.223         Range in us     Cumulative    IO count
00:19:29.223    518.827 -   522.728:    0.0013%  (        1)
00:19:29.223    608.549 -   612.450:    0.0027%  (        1)
00:19:29.223    612.450 -   616.350:    0.0040%  (        1)
00:19:29.223    635.855 -   639.756:    0.0053%  (        1)
00:19:29.223    647.558 -   651.459:    0.0066%  (        1)
00:19:29.223    663.162 -   667.063:    0.0093%  (        2)
00:19:29.223    674.865 -   678.766:    0.0106%  (        1)
00:19:29.223    678.766 -   682.667:    0.0119%  (        1)
00:19:29.223    686.568 -   690.469:    0.0133%  (        1)
00:19:29.223    706.072 -   709.973:    0.0146%  (        1)
00:19:29.223    713.874 -   717.775:    0.0159%  (        1)
00:19:29.223    717.775 -   721.676:    0.0186%  (        2)
00:19:29.223    721.676 -   725.577:    0.0199%  (        1)
00:19:29.223    725.577 -   729.478:    0.0212%  (        1)
00:19:29.223    729.478 -   733.379:    0.0226%  (        1)
00:19:29.223    733.379 -   737.280:    0.0239%  (        1)
00:19:29.223    741.181 -   745.082:    0.0265%  (        2)
00:19:29.223    745.082 -   748.983:    0.0279%  (        1)
00:19:29.223    748.983 -   752.884:    0.0292%  (        1)
00:19:29.223    752.884 -   756.785:    0.0318%  (        2)
00:19:29.223    756.785 -   760.686:    0.0345%  (        2)
00:19:29.223    760.686 -   764.587:    0.0358%  (        1)
00:19:29.223    764.587 -   768.488:    0.0385%  (        2)
00:19:29.223    768.488 -   772.389:    0.0411%  (        2)
00:19:29.223    776.290 -   780.190:    0.0425%  (        1)
00:19:29.224    780.190 -   784.091:    0.0478%  (        4)
00:19:29.224    791.893 -   795.794:    0.0491%  (        1)
00:19:29.224    795.794 -   799.695:    0.0517%  (        2)
00:19:29.224    799.695 -   803.596:    0.0544%  (        2)
00:19:29.224    811.398 -   815.299:    0.0557%  (        1)
00:19:29.224    815.299 -   819.200:    0.0597%  (        3)
00:19:29.224    819.200 -   823.101:    0.0637%  (        3)
00:19:29.224    823.101 -   827.002:    0.0663%  (        2)
00:19:29.224    827.002 -   830.903:    0.0716%  (        4)
00:19:29.224    830.903 -   834.804:    0.0730%  (        1)
00:19:29.224    834.804 -   838.705:    0.0743%  (        1)
00:19:29.224    838.705 -   842.606:    0.0756%  (        1)
00:19:29.224    842.606 -   846.507:    0.0823%  (        5)
00:19:29.224    846.507 -   850.408:    0.0836%  (        1)
00:19:29.224    850.408 -   854.309:    0.0849%  (        1)
00:19:29.224    854.309 -   858.210:    0.0889%  (        3)
00:19:29.224    858.210 -   862.110:    0.0902%  (        1)
00:19:29.224    862.110 -   866.011:    0.0915%  (        1)
00:19:29.224    866.011 -   869.912:    0.0955%  (        3)
00:19:29.224    869.912 -   873.813:    0.0969%  (        1)
00:19:29.224    873.813 -   877.714:    0.1008%  (        3)
00:19:29.224    877.714 -   881.615:    0.1048%  (        3)
00:19:29.224    881.615 -   885.516:    0.1075%  (        2)
00:19:29.224    885.516 -   889.417:    0.1101%  (        2)
00:19:29.224    889.417 -   893.318:    0.1168%  (        5)
00:19:29.224    893.318 -   897.219:    0.1234%  (        5)
00:19:29.224    897.219 -   901.120:    0.1260%  (        2)
00:19:29.224    901.120 -   905.021:    0.1287%  (        2)
00:19:29.224    905.021 -   908.922:    0.1340%  (        4)
00:19:29.224    908.922 -   912.823:    0.1420%  (        6)
00:19:29.224    912.823 -   916.724:    0.1446%  (        2)
00:19:29.224    916.724 -   920.625:    0.1526%  (        6)
00:19:29.224    920.625 -   924.526:    0.1579%  (        4)
00:19:29.224    924.526 -   928.427:    0.1632%  (        4)
00:19:29.224    928.427 -   932.328:    0.1725%  (        7)
00:19:29.224    932.328 -   936.229:    0.1818%  (        7)
00:19:29.224    936.229 -   940.130:    0.1950%  (       10)
00:19:29.224    940.130 -   944.030:    0.2070%  (        9)
00:19:29.224    944.030 -   947.931:    0.2202%  (       10)
00:19:29.224    947.931 -   951.832:    0.2348%  (       11)
00:19:29.224    951.832 -   955.733:    0.2402%  (        4)
00:19:29.224    955.733 -   959.634:    0.2521%  (        9)
00:19:29.224    959.634 -   963.535:    0.2640%  (        9)
00:19:29.224    963.535 -   967.436:    0.2786%  (       11)
00:19:29.224    967.436 -   971.337:    0.2879%  (        7)
00:19:29.224    971.337 -   975.238:    0.3052%  (       13)
00:19:29.224    975.238 -   979.139:    0.3131%  (        6)
00:19:29.224    979.139 -   983.040:    0.3264%  (       10)
00:19:29.224    983.040 -   986.941:    0.3410%  (       11)
00:19:29.224    986.941 -   990.842:    0.3529%  (        9)
00:19:29.224    990.842 -   994.743:    0.3649%  (        9)
00:19:29.224    994.743 -   998.644:    0.3755%  (        8)
00:19:29.224    998.644 -  1006.446:    0.4325%  (       43)
00:19:29.224   1006.446 -  1014.248:    0.4697%  (       28)
00:19:29.224   1014.248 -  1022.050:    0.5108%  (       31)
00:19:29.224   1022.050 -  1029.851:    0.5612%  (       38)
00:19:29.224   1029.851 -  1037.653:    0.6103%  (       37)
00:19:29.224   1037.653 -  1045.455:    0.6607%  (       38)
00:19:29.224   1045.455 -  1053.257:    0.7151%  (       41)
00:19:29.224   1053.257 -  1061.059:    0.7709%  (       42)
00:19:29.224   1061.059 -  1068.861:    0.8359%  (       49)
00:19:29.224   1068.861 -  1076.663:    0.9036%  (       51)
00:19:29.224   1076.663 -  1084.465:    0.9832%  (       60)
00:19:29.224   1084.465 -  1092.267:    1.0734%  (       68)
00:19:29.224   1092.267 -  1100.069:    1.1490%  (       57)
00:19:29.224   1100.069 -  1107.870:    1.2233%  (       56)
00:19:29.224   1107.870 -  1115.672:    1.3321%  (       82)
00:19:29.224   1115.672 -  1123.474:    1.4462%  (       86)
00:19:29.224   1123.474 -  1131.276:    1.5431%  (       73)
00:19:29.224   1131.276 -  1139.078:    1.6466%  (       78)
00:19:29.224   1139.078 -  1146.880:    1.7527%  (       80)
00:19:29.224   1146.880 -  1154.682:    1.8934%  (      106)
00:19:29.224   1154.682 -  1162.484:    2.0459%  (      115)
00:19:29.224   1162.484 -  1170.286:    2.2158%  (      128)
00:19:29.224   1170.286 -  1178.088:    2.3763%  (      121)
00:19:29.224   1178.088 -  1185.890:    2.5382%  (      122)
00:19:29.224   1185.890 -  1193.691:    2.7000%  (      122)
00:19:29.224   1193.691 -  1201.493:    2.9084%  (      157)
00:19:29.224   1201.493 -  1209.295:    3.0888%  (      136)
00:19:29.224   1209.295 -  1217.097:    3.3303%  (      182)
00:19:29.224   1217.097 -  1224.899:    3.5439%  (      161)
00:19:29.224   1224.899 -  1232.701:    3.8185%  (      207)
00:19:29.224   1232.701 -  1240.503:    4.1104%  (      220)
00:19:29.224   1240.503 -  1248.305:    4.4076%  (      224)
00:19:29.224   1248.305 -  1256.107:    4.7805%  (      281)
00:19:29.224   1256.107 -  1263.909:    5.1706%  (      294)
00:19:29.224   1263.909 -  1271.710:    5.5514%  (      287)
00:19:29.224   1271.710 -  1279.512:    5.9693%  (      315)
00:19:29.224   1279.512 -  1287.314:    6.3979%  (      323)
00:19:29.224   1287.314 -  1295.116:    6.8994%  (      378)
00:19:29.224   1295.116 -  1302.918:    7.4076%  (      383)
00:19:29.224   1302.918 -  1310.720:    7.9794%  (      431)
00:19:29.224   1310.720 -  1318.522:    8.5990%  (      467)
00:19:29.224   1318.522 -  1326.324:    9.2903%  (      521)
00:19:29.224   1326.324 -  1334.126:    9.9484%  (      496)
00:19:29.224   1334.126 -  1341.928:   10.7219%  (      583)
00:19:29.224   1341.928 -  1349.730:   11.5605%  (      632)
00:19:29.224   1349.730 -  1357.531:   12.4003%  (      633)
00:19:29.224   1357.531 -  1365.333:   13.2203%  (      618)
00:19:29.224   1365.333 -  1373.135:   14.1013%  (      664)
00:19:29.224   1373.135 -  1380.937:   14.9942%  (      673)
00:19:29.224   1380.937 -  1388.739:   16.0252%  (      777)
00:19:29.224   1388.739 -  1396.541:   17.0070%  (      740)
00:19:29.224   1396.541 -  1404.343:   17.9623%  (      720)
00:19:29.224   1404.343 -  1412.145:   19.0025%  (      784)
00:19:29.224   1412.145 -  1419.947:   20.0454%  (      786)
00:19:29.224   1419.947 -  1427.749:   21.0949%  (      791)
00:19:29.224   1427.749 -  1435.550:   22.1616%  (      804)
00:19:29.224   1435.550 -  1443.352:   23.2987%  (      857)
00:19:29.224   1443.352 -  1451.154:   24.4769%  (      888)
00:19:29.224   1451.154 -  1458.956:   25.6617%  (      893)
00:19:29.224   1458.956 -  1466.758:   26.8453%  (      892)
00:19:29.224   1466.758 -  1474.560:   28.0991%  (      945)
00:19:29.224   1474.560 -  1482.362:   29.3596%  (      950)
00:19:29.224   1482.362 -  1490.164:   30.5431%  (      892)
00:19:29.224   1490.164 -  1497.966:   31.8447%  (      981)
00:19:29.224   1497.966 -  1505.768:   33.0547%  (      912)
00:19:29.224   1505.768 -  1513.570:   34.2900%  (      931)
00:19:29.224   1513.570 -  1521.371:   35.6738%  (     1043)
00:19:29.224   1521.371 -  1529.173:   36.9675%  (      975)
00:19:29.224   1529.173 -  1536.975:   38.2027%  (      931)
00:19:29.224   1536.975 -  1544.777:   39.4685%  (      954)
00:19:29.224   1544.777 -  1552.579:   40.7316%  (      952)
00:19:29.224   1552.579 -  1560.381:   41.9443%  (      914)
00:19:29.224   1560.381 -  1568.183:   43.1344%  (      897)
00:19:29.224   1568.183 -  1575.985:   44.3140%  (      889)
00:19:29.224   1575.985 -  1583.787:   45.4431%  (      851)
00:19:29.224   1583.787 -  1591.589:   46.6359%  (      899)
00:19:29.224   1591.589 -  1599.390:   47.8008%  (      878)
00:19:29.224   1599.390 -  1607.192:   48.8530%  (      793)
00:19:29.224   1607.192 -  1614.994:   50.0166%  (      877)
00:19:29.224   1614.994 -  1622.796:   51.1072%  (      822)
00:19:29.224   1622.796 -  1630.598:   52.2138%  (      834)
00:19:29.224   1630.598 -  1638.400:   53.3150%  (      830)
00:19:29.224   1638.400 -  1646.202:   54.3367%  (      770)
00:19:29.224   1646.202 -  1654.004:   55.2933%  (      721)
00:19:29.224   1654.004 -  1661.806:   56.2964%  (      756)
00:19:29.224   1661.806 -  1669.608:   57.2795%  (      741)
00:19:29.224   1669.608 -  1677.410:   58.2866%  (      759)
00:19:29.224   1677.410 -  1685.211:   59.3427%  (      796)
00:19:29.224   1685.211 -  1693.013:   60.2409%  (      677)
00:19:29.224   1693.013 -  1700.815:   61.1458%  (      682)
00:19:29.224   1700.815 -  1708.617:   61.9658%  (      618)
00:19:29.224   1708.617 -  1716.419:   62.9649%  (      753)
00:19:29.224   1716.419 -  1724.221:   63.7981%  (      628)
00:19:29.224   1724.221 -  1732.023:   64.5451%  (      563)
00:19:29.224   1732.023 -  1739.825:   65.4301%  (      667)
00:19:29.224   1739.825 -  1747.627:   66.2036%  (      583)
00:19:29.224   1747.627 -  1755.429:   66.9002%  (      525)
00:19:29.224   1755.429 -  1763.230:   67.5875%  (      518)
00:19:29.224   1763.230 -  1771.032:   68.2549%  (      503)
00:19:29.224   1771.032 -  1778.834:   68.9302%  (      509)
00:19:29.224   1778.834 -  1786.636:   69.5976%  (      503)
00:19:29.224   1786.636 -  1794.438:   70.2782%  (      513)
00:19:29.224   1794.438 -  1802.240:   70.8554%  (      435)
00:19:29.224   1802.240 -  1810.042:   71.5268%  (      506)
00:19:29.224   1810.042 -  1817.844:   72.1583%  (      476)
00:19:29.224   1817.844 -  1825.646:   72.8191%  (      498)
00:19:29.224   1825.646 -  1833.448:   73.4692%  (      490)
00:19:29.224   1833.448 -  1841.250:   74.0848%  (      464)
00:19:29.224   1841.250 -  1849.051:   74.6580%  (      432)
00:19:29.224   1849.051 -  1856.853:   75.2219%  (      425)
00:19:29.224   1856.853 -  1864.655:   75.8203%  (      451)
00:19:29.224   1864.655 -  1872.457:   76.3364%  (      389)
00:19:29.224   1872.457 -  1880.259:   76.9189%  (      439)
00:19:29.224   1880.259 -  1888.061:   77.4496%  (      400)
00:19:29.224   1888.061 -  1895.863:   77.9684%  (      391)
00:19:29.224   1895.863 -  1903.665:   78.4354%  (      352)
00:19:29.224   1903.665 -  1911.467:   78.8812%  (      336)
00:19:29.224   1911.467 -  1919.269:   79.3682%  (      367)
00:19:29.224   1919.269 -  1927.070:   79.7914%  (      319)
00:19:29.224   1927.070 -  1934.872:   80.2823%  (      370)
00:19:29.224   1934.872 -  1942.674:   80.7069%  (      320)
00:19:29.224   1942.674 -  1950.476:   81.1647%  (      345)
00:19:29.224   1950.476 -  1958.278:   81.5773%  (      311)
00:19:29.224   1958.278 -  1966.080:   81.9939%  (      314)
00:19:29.224   1966.080 -  1973.882:   82.4331%  (      331)
00:19:29.224   1973.882 -  1981.684:   82.8510%  (      315)
00:19:29.224   1981.684 -  1989.486:   83.2252%  (      282)
00:19:29.224   1989.486 -  1997.288:   83.6538%  (      323)
00:19:29.224   1997.288 -  2012.891:   84.4804%  (      623)
00:19:29.224   2012.891 -  2028.495:   85.2738%  (      598)
00:19:29.224   2028.495 -  2044.099:   85.9584%  (      516)
00:19:29.224   2044.099 -  2059.703:   86.6974%  (      557)
00:19:29.224   2059.703 -  2075.307:   87.3502%  (      492)
00:19:29.224   2075.307 -  2090.910:   87.9566%  (      457)
00:19:29.224   2090.910 -  2106.514:   88.5629%  (      457)
00:19:29.224   2106.514 -  2122.118:   89.1706%  (      458)
00:19:29.224   2122.118 -  2137.722:   89.7571%  (      442)
00:19:29.224   2137.722 -  2153.326:   90.2904%  (      402)
00:19:29.224   2153.326 -  2168.930:   90.7601%  (      354)
00:19:29.224   2168.930 -  2184.533:   91.2126%  (      341)
00:19:29.224   2184.533 -  2200.137:   91.6690%  (      344)
00:19:29.224   2200.137 -  2215.741:   92.0803%  (      310)
00:19:29.224   2215.741 -  2231.345:   92.4810%  (      302)
00:19:29.224   2231.345 -  2246.949:   92.8684%  (      292)
00:19:29.225   2246.949 -  2262.552:   93.2028%  (      252)
00:19:29.225   2262.552 -  2278.156:   93.5491%  (      261)
00:19:29.225   2278.156 -  2293.760:   93.8529%  (      229)
00:19:29.225   2293.760 -  2309.364:   94.1475%  (      222)
00:19:29.225   2309.364 -  2324.968:   94.4168%  (      203)
00:19:29.225   2324.968 -  2340.571:   94.7087%  (      220)
00:19:29.225   2340.571 -  2356.175:   94.9847%  (      208)
00:19:29.225   2356.175 -  2371.779:   95.2527%  (      202)
00:19:29.225   2371.779 -  2387.383:   95.4849%  (      175)
00:19:29.225   2387.383 -  2402.987:   95.6759%  (      144)
00:19:29.225   2402.987 -  2418.590:   95.8418%  (      125)
00:19:29.225   2418.590 -  2434.194:   96.0275%  (      140)
00:19:29.225   2434.194 -  2449.798:   96.1775%  (      113)
00:19:29.225   2449.798 -  2465.402:   96.3247%  (      111)
00:19:29.225   2465.402 -  2481.006:   96.4933%  (      127)
00:19:29.225   2481.006 -  2496.610:   96.6246%  (       99)
00:19:29.225   2496.610 -  2512.213:   96.7440%  (       90)
00:19:29.225   2512.213 -  2527.817:   96.8608%  (       88)
00:19:29.225   2527.817 -  2543.421:   96.9656%  (       79)
00:19:29.225   2543.421 -  2559.025:   97.0598%  (       71)
00:19:29.225   2559.025 -  2574.629:   97.1593%  (       75)
00:19:29.225   2574.629 -  2590.232:   97.2601%  (       76)
00:19:29.225   2590.232 -  2605.836:   97.3464%  (       65)
00:19:29.225   2605.836 -  2621.440:   97.4379%  (       69)
00:19:29.225   2621.440 -  2637.044:   97.5189%  (       61)
00:19:29.225   2637.044 -  2652.648:   97.5918%  (       55)
00:19:29.225   2652.648 -  2668.251:   97.6715%  (       60)
00:19:29.225   2668.251 -  2683.855:   97.7484%  (       58)
00:19:29.225   2683.855 -  2699.459:   97.8267%  (       59)
00:19:29.225   2699.459 -  2715.063:   97.8997%  (       55)
00:19:29.225   2715.063 -  2730.667:   97.9726%  (       55)
00:19:29.225   2730.667 -  2746.270:   98.0337%  (       46)
00:19:29.225   2746.270 -  2761.874:   98.0894%  (       42)
00:19:29.225   2761.874 -  2777.478:   98.1465%  (       43)
00:19:29.225   2777.478 -  2793.082:   98.2115%  (       49)
00:19:29.225   2793.082 -  2808.686:   98.2725%  (       46)
00:19:29.225   2808.686 -  2824.290:   98.3349%  (       47)
00:19:29.225   2824.290 -  2839.893:   98.3866%  (       39)
00:19:29.225   2839.893 -  2855.497:   98.4357%  (       37)
00:19:29.225   2855.497 -  2871.101:   98.4848%  (       37)
00:19:29.225   2871.101 -  2886.705:   98.5286%  (       33)
00:19:29.225   2886.705 -  2902.309:   98.5710%  (       32)
00:19:29.225   2902.309 -  2917.912:   98.6122%  (       31)
00:19:29.225   2917.912 -  2933.516:   98.6559%  (       33)
00:19:29.225   2933.516 -  2949.120:   98.6958%  (       30)
00:19:29.225   2949.120 -  2964.724:   98.7356%  (       30)
00:19:29.225   2964.724 -  2980.328:   98.7780%  (       32)
00:19:29.225   2980.328 -  2995.931:   98.8125%  (       26)
00:19:29.225   2995.931 -  3011.535:   98.8457%  (       25)
00:19:29.225   3011.535 -  3027.139:   98.8828%  (       28)
00:19:29.225   3027.139 -  3042.743:   98.9160%  (       25)
00:19:29.225   3042.743 -  3058.347:   98.9465%  (       23)
00:19:29.225   3058.347 -  3073.950:   98.9770%  (       23)
00:19:29.225   3073.950 -  3089.554:   99.0062%  (       22)
00:19:29.225   3089.554 -  3105.158:   99.0381%  (       24)
00:19:29.225   3105.158 -  3120.762:   99.0659%  (       21)
00:19:29.225   3120.762 -  3136.366:   99.0938%  (       21)
00:19:29.225   3136.366 -  3151.970:   99.1230%  (       22)
00:19:29.225   3151.970 -  3167.573:   99.1469%  (       18)
00:19:29.225   3167.573 -  3183.177:   99.1615%  (       11)
00:19:29.225   3183.177 -  3198.781:   99.1800%  (       14)
00:19:29.225   3198.781 -  3214.385:   99.1973%  (       13)
00:19:29.225   3214.385 -  3229.989:   99.2145%  (       13)
00:19:29.225   3229.989 -  3245.592:   99.2331%  (       14)
00:19:29.225   3245.592 -  3261.196:   99.2490%  (       12)
00:19:29.225   3261.196 -  3276.800:   99.2716%  (       17)
00:19:29.225   3276.800 -  3292.404:   99.2862%  (       11)
00:19:29.225   3292.404 -  3308.008:   99.3021%  (       12)
00:19:29.225   3308.008 -  3323.611:   99.3193%  (       13)
00:19:29.225   3323.611 -  3339.215:   99.3406%  (       16)
00:19:29.225   3339.215 -  3354.819:   99.3592%  (       14)
00:19:29.225   3354.819 -  3370.423:   99.3711%  (        9)
00:19:29.225   3370.423 -  3386.027:   99.3857%  (       11)
00:19:29.225   3386.027 -  3401.630:   99.3963%  (        8)
00:19:29.225   3401.630 -  3417.234:   99.4043%  (        6)
00:19:29.225   3417.234 -  3432.838:   99.4175%  (       10)
00:19:29.225   3432.838 -  3448.442:   99.4268%  (        7)
00:19:29.225   3448.442 -  3464.046:   99.4401%  (       10)
00:19:29.225   3464.046 -  3479.650:   99.4507%  (        8)
00:19:29.225   3479.650 -  3495.253:   99.4600%  (        7)
00:19:29.225   3495.253 -  3510.857:   99.4693%  (        7)
00:19:29.225   3510.857 -  3526.461:   99.4839%  (       11)
00:19:29.225   3526.461 -  3542.065:   99.4918%  (        6)
00:19:29.225   3542.065 -  3557.669:   99.5038%  (        9)
00:19:29.225   3557.669 -  3573.272:   99.5170%  (       10)
00:19:29.225   3573.272 -  3588.876:   99.5277%  (        8)
00:19:29.225   3588.876 -  3604.480:   99.5396%  (        9)
00:19:29.225   3604.480 -  3620.084:   99.5529%  (       10)
00:19:29.225   3620.084 -  3635.688:   99.5622%  (        7)
00:19:29.225   3635.688 -  3651.291:   99.5741%  (        9)
00:19:29.225   3651.291 -  3666.895:   99.5807%  (        5)
00:19:29.225   3666.895 -  3682.499:   99.5887%  (        6)
00:19:29.225   3682.499 -  3698.103:   99.5940%  (        4)
00:19:29.225   3698.103 -  3713.707:   99.6033%  (        7)
00:19:29.225   3713.707 -  3729.310:   99.6112%  (        6)
00:19:29.225   3729.310 -  3744.914:   99.6179%  (        5)
00:19:29.225   3744.914 -  3760.518:   99.6219%  (        3)
00:19:29.225   3760.518 -  3776.122:   99.6285%  (        5)
00:19:29.225   3776.122 -  3791.726:   99.6325%  (        3)
00:19:29.225   3791.726 -  3807.330:   99.6404%  (        6)
00:19:29.225   3807.330 -  3822.933:   99.6471%  (        5)
00:19:29.225   3822.933 -  3838.537:   99.6550%  (        6)
00:19:29.225   3838.537 -  3854.141:   99.6617%  (        5)
00:19:29.225   3854.141 -  3869.745:   99.6670%  (        4)
00:19:29.225   3869.745 -  3885.349:   99.6723%  (        4)
00:19:29.225   3885.349 -  3900.952:   99.6776%  (        4)
00:19:29.225   3900.952 -  3916.556:   99.6829%  (        4)
00:19:29.225   3916.556 -  3932.160:   99.6922%  (        7)
00:19:29.225   3932.160 -  3947.764:   99.7015%  (        7)
00:19:29.225   3947.764 -  3963.368:   99.7028%  (        1)
00:19:29.225   3963.368 -  3978.971:   99.7081%  (        4)
00:19:29.225   3978.971 -  3994.575:   99.7134%  (        4)
00:19:29.225   3994.575 -  4025.783:   99.7240%  (        8)
00:19:29.225   4025.783 -  4056.990:   99.7307%  (        5)
00:19:29.225   4056.990 -  4088.198:   99.7360%  (        4)
00:19:29.225   4088.198 -  4119.406:   99.7399%  (        3)
00:19:29.225   4119.406 -  4150.613:   99.7439%  (        3)
00:19:29.225   4150.613 -  4181.821:   99.7479%  (        3)
00:19:29.225   4181.821 -  4213.029:   99.7519%  (        3)
00:19:29.225   4213.029 -  4244.236:   99.7559%  (        3)
00:19:29.225   4244.236 -  4275.444:   99.7585%  (        2)
00:19:29.225   4275.444 -  4306.651:   99.7612%  (        2)
00:19:29.225   4306.651 -  4337.859:   99.7638%  (        2)
00:19:29.225   4337.859 -  4369.067:   99.7678%  (        3)
00:19:29.225   4369.067 -  4400.274:   99.7705%  (        2)
00:19:29.225   4400.274 -  4431.482:   99.7758%  (        4)
00:19:29.225   4431.482 -  4462.690:   99.7771%  (        1)
00:19:29.225   4462.690 -  4493.897:   99.7824%  (        4)
00:19:29.225   4493.897 -  4525.105:   99.7864%  (        3)
00:19:29.225   4525.105 -  4556.312:   99.7930%  (        5)
00:19:29.225   4556.312 -  4587.520:   99.8010%  (        6)
00:19:29.225   4587.520 -  4618.728:   99.8036%  (        2)
00:19:29.225   4618.728 -  4649.935:   99.8103%  (        5)
00:19:29.225   4649.935 -  4681.143:   99.8196%  (        7)
00:19:29.225   4681.143 -  4712.350:   99.8249%  (        4)
00:19:29.225   4712.350 -  4743.558:   99.8275%  (        2)
00:19:29.225   4743.558 -  4774.766:   99.8315%  (        3)
00:19:29.225   4774.766 -  4805.973:   99.8328%  (        1)
00:19:29.225   4805.973 -  4837.181:   99.8368%  (        3)
00:19:29.225   4837.181 -  4868.389:   99.8395%  (        2)
00:19:29.225   4868.389 -  4899.596:   99.8434%  (        3)
00:19:29.225   4899.596 -  4930.804:   99.8461%  (        2)
00:19:29.225   4930.804 -  4962.011:   99.8487%  (        2)
00:19:29.225   4962.011 -  4993.219:   99.8514%  (        2)
00:19:29.225   4993.219 -  5024.427:   99.8541%  (        2)
00:19:29.225   5024.427 -  5055.634:   99.8594%  (        4)
00:19:29.225   5055.634 -  5086.842:   99.8647%  (        4)
00:19:29.225   5086.842 -  5118.050:   99.8660%  (        1)
00:19:29.225   5118.050 -  5149.257:   99.8713%  (        4)
00:19:29.225   5149.257 -  5180.465:   99.8753%  (        3)
00:19:29.225   5180.465 -  5211.672:   99.8793%  (        3)
00:19:29.225   5211.672 -  5242.880:   99.8832%  (        3)
00:19:29.225   5242.880 -  5274.088:   99.8859%  (        2)
00:19:29.225   5274.088 -  5305.295:   99.8899%  (        3)
00:19:29.225   5305.295 -  5336.503:   99.8939%  (        3)
00:19:29.225   5336.503 -  5367.710:   99.8965%  (        2)
00:19:29.225   5367.710 -  5398.918:   99.8992%  (        2)
00:19:29.225   5398.918 -  5430.126:   99.9018%  (        2)
00:19:29.225   5430.126 -  5461.333:   99.9058%  (        3)
00:19:29.225   5461.333 -  5492.541:   99.9098%  (        3)
00:19:29.225   5492.541 -  5523.749:   99.9138%  (        3)
00:19:29.225   5523.749 -  5554.956:   99.9164%  (        2)
00:19:29.225   5554.956 -  5586.164:   99.9191%  (        2)
00:19:29.225   5586.164 -  5617.371:   99.9244%  (        4)
00:19:29.225   5617.371 -  5648.579:   99.9257%  (        1)
00:19:29.225   5648.579 -  5679.787:   99.9297%  (        3)
00:19:29.225   5679.787 -  5710.994:   99.9323%  (        2)
00:19:29.225   5710.994 -  5742.202:   99.9350%  (        2)
00:19:29.225   5742.202 -  5773.410:   99.9363%  (        1)
00:19:29.225   5773.410 -  5804.617:   99.9376%  (        1)
00:19:29.225   5804.617 -  5835.825:   99.9390%  (        1)
00:19:29.225   5835.825 -  5867.032:   99.9403%  (        1)
00:19:29.225   5867.032 -  5898.240:   99.9416%  (        1)
00:19:29.225   5898.240 -  5929.448:   99.9443%  (        2)
00:19:29.225   5929.448 -  5960.655:   99.9456%  (        1)
00:19:29.225   5960.655 -  5991.863:   99.9483%  (        2)
00:19:29.225   5991.863 -  6023.070:   99.9496%  (        1)
00:19:29.225   6023.070 -  6054.278:   99.9509%  (        1)
00:19:29.225   6054.278 -  6085.486:   99.9522%  (        1)
00:19:29.225   6085.486 -  6116.693:   99.9536%  (        1)
00:19:29.225   6116.693 -  6147.901:   99.9549%  (        1)
00:19:29.225   6147.901 -  6179.109:   99.9575%  (        2)
00:19:29.225   6179.109 -  6210.316:   99.9589%  (        1)
00:19:29.225   6210.316 -  6241.524:   99.9602%  (        1)
00:19:29.225   6241.524 -  6272.731:   99.9615%  (        1)
00:19:29.225   6272.731 -  6303.939:   99.9642%  (        2)
00:19:29.225   6303.939 -  6335.147:   99.9655%  (        1)
00:19:29.225   6335.147 -  6366.354:   99.9668%  (        1)
00:19:29.225   6366.354 -  6397.562:   99.9682%  (        1)
00:19:29.225   6397.562 -  6428.770:   99.9708%  (        2)
00:19:29.225   6428.770 -  6459.977:   99.9721%  (        1)
00:19:29.225   6459.977 -  6491.185:   99.9735%  (        1)
00:19:29.225   6491.185 -  6522.392:   99.9748%  (        1)
00:19:29.225   6522.392 -  6553.600:   99.9774%  (        2)
00:19:29.225   6553.600 -  6584.808:   99.9788%  (        1)
00:19:29.225   6584.808 -  6616.015:   99.9801%  (        1)
00:19:29.225   6616.015 -  6647.223:   99.9814%  (        1)
00:19:29.226   6647.223 -  6678.430:   99.9841%  (        2)
00:19:29.226   6678.430 -  6709.638:   99.9854%  (        1)
00:19:29.226   6709.638 -  6740.846:   99.9867%  (        1)
00:19:29.226   6740.846 -  6772.053:   99.9881%  (        1)
00:19:29.226   6772.053 -  6803.261:   99.9907%  (        2)
00:19:29.226   6803.261 -  6834.469:   99.9920%  (        1)
00:19:29.226   6865.676 -  6896.884:   99.9934%  (        1)
00:19:29.226   6959.299 -  6990.507:   99.9947%  (        1)
00:19:29.226   8800.549 -  8862.964:   99.9973%  (        2)
00:19:29.226   8862.964 -  8925.379:   99.9987%  (        1)
00:19:29.226   8987.794 -  9050.210:  100.0000%  (        1)
00:19:29.226  
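The 1698.64us average reported further up can also be reproduced approximately from the histogram alone by weighting each bucket's midpoint by its I/O count; a sketch under the same file assumption (approximate only, since samples inside a bucket are not actually at its midpoint):

  awk '/%/ && /\(/ {
      gsub(/[():%]/, "")
      mid = ($2 + $4) / 2           # bucket midpoint in us
      sum += mid * $NF; n += $NF    # count-weighted sum and total IOs
  }
  END { printf "approx avg: %.2f us over %d IOs\n", sum / n, n }' histogram.txt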
00:19:29.226   13:56:11 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:19:29.226  
00:19:29.226  real	0m2.812s
00:19:29.226  user	0m2.290s
00:19:29.226  sys	0m0.424s
00:19:29.226   13:56:11 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:29.226   13:56:11 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:19:29.226  ************************************
00:19:29.226  END TEST nvme_perf
00:19:29.226  ************************************
00:19:29.226   13:56:11 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:19:29.226   13:56:11 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:19:29.226   13:56:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:29.226   13:56:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:29.226  ************************************
00:19:29.226  START TEST nvme_hello_world
00:19:29.226  ************************************
00:19:29.226   13:56:11 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:19:29.485  Initializing NVMe Controllers
00:19:29.485  Attached to 0000:00:10.0
00:19:29.485    Namespace ID: 1 size: 5GB
00:19:29.485  Initialization complete.
00:19:29.485  INFO: using host memory buffer for IO
00:19:29.485  Hello world!
00:19:29.485  ************************************
00:19:29.485  END TEST nvme_hello_world
00:19:29.485  ************************************
00:19:29.485  
00:19:29.485  real	0m0.411s
00:19:29.485  user	0m0.152s
00:19:29.485  sys	0m0.215s
00:19:29.485   13:56:12 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:29.485   13:56:12 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
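The run_test calls throughout this log come from autotest_common.sh; all the log itself shows is a START banner, the command run with xtrace disabled, an END banner, and bash time output. A minimal stand-in with the same observable shape (a sketch only, assuming nothing about the real harness beyond what is visible above):

  run_test_sketch() {
      local name=$1; shift
      printf '%s\n' '************************************' \
                    "START TEST $name" \
                    '************************************'
      time {
          "$@"
          printf '%s\n' '************************************' \
                        "END TEST $name" \
                        '************************************'
      }
  }
  # e.g. run_test_sketch nvme_hello_world \
  #          /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0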
00:19:29.744   13:56:12 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:19:29.744   13:56:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:29.744   13:56:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:29.744   13:56:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:29.744  ************************************
00:19:29.744  START TEST nvme_sgl
00:19:29.744  ************************************
00:19:29.744   13:56:12 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:19:30.003  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:19:30.003  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:19:30.003  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:19:30.003  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:19:30.003  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:19:30.003  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:19:30.003  NVMe Readv/Writev Request test
00:19:30.003  Attached to 0000:00:10.0
00:19:30.003  0000:00:10.0: build_io_request_2 test passed
00:19:30.003  0000:00:10.0: build_io_request_4 test passed
00:19:30.003  0000:00:10.0: build_io_request_5 test passed
00:19:30.003  0000:00:10.0: build_io_request_6 test passed
00:19:30.003  0000:00:10.0: build_io_request_7 test passed
00:19:30.003  0000:00:10.0: build_io_request_10 test passed
00:19:30.003  Cleaning up...
00:19:30.003  ************************************
00:19:30.003  END TEST nvme_sgl
00:19:30.003  ************************************
00:19:30.003  
00:19:30.003  real	0m0.405s
00:19:30.003  user	0m0.178s
00:19:30.003  sys	0m0.176s
00:19:30.003   13:56:12 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:30.003   13:56:12 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:19:30.003   13:56:12 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:19:30.003   13:56:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:30.003   13:56:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:30.003   13:56:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:30.003  ************************************
00:19:30.003  START TEST nvme_e2edp
00:19:30.003  ************************************
00:19:30.003   13:56:12 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:19:30.572  NVMe Write/Read with End-to-End data protection test
00:19:30.572  Attached to 0000:00:10.0
00:19:30.572  Cleaning up...
00:19:30.572  ************************************
00:19:30.572  END TEST nvme_e2edp
00:19:30.572  ************************************
00:19:30.572  
00:19:30.572  real	0m0.398s
00:19:30.572  user	0m0.145s
00:19:30.572  sys	0m0.207s
00:19:30.572   13:56:13 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:30.572   13:56:13 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:19:30.572   13:56:13 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:19:30.572   13:56:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:30.572   13:56:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:30.572   13:56:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:30.572  ************************************
00:19:30.572  START TEST nvme_reserve
00:19:30.572  ************************************
00:19:30.572   13:56:13 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:19:30.830  =====================================================
00:19:30.830  NVMe Controller at PCI bus 0, device 16, function 0
00:19:30.830  =====================================================
00:19:30.830  Reservations:                Not Supported
00:19:30.830  Reservation test passed
00:19:31.089  ************************************
00:19:31.089  END TEST nvme_reserve
00:19:31.089  ************************************
00:19:31.089  
00:19:31.089  real	0m0.413s
00:19:31.089  user	0m0.152s
00:19:31.089  sys	0m0.205s
00:19:31.089   13:56:13 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:31.089   13:56:13 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:19:31.089   13:56:13 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:19:31.089   13:56:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:31.089   13:56:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:31.089   13:56:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:31.089  ************************************
00:19:31.089  START TEST nvme_err_injection
00:19:31.089  ************************************
00:19:31.089   13:56:13 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:19:31.348  NVMe Error Injection test
00:19:31.348  Attached to 0000:00:10.0
00:19:31.348  0000:00:10.0: get features failed as expected
00:19:31.348  0000:00:10.0: get features successfully as expected
00:19:31.348  0000:00:10.0: read failed as expected
00:19:31.348  0000:00:10.0: read successfully as expected
00:19:31.348  Cleaning up...
00:19:31.348  ************************************
00:19:31.348  END TEST nvme_err_injection
00:19:31.348  ************************************
00:19:31.348  
00:19:31.348  real	0m0.401s
00:19:31.348  user	0m0.150s
00:19:31.348  sys	0m0.202s
00:19:31.348   13:56:14 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:31.348   13:56:14 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:19:31.607   13:56:14 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:19:31.607   13:56:14 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:19:31.607   13:56:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:31.607   13:56:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:31.607  ************************************
00:19:31.607  START TEST nvme_overhead
00:19:31.607  ************************************
00:19:31.607   13:56:14 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:19:32.986  Initializing NVMe Controllers
00:19:32.986  Attached to 0000:00:10.0
00:19:32.986  Initialization complete. Launching workers.
00:19:32.986  submit (in ns)   avg, min, max =  15347.0,  12701.0, 105763.8
00:19:32.986  complete (in ns) avg, min, max =  10389.0,   8561.0, 161610.5
00:19:32.986  
00:19:32.986  Submit histogram
00:19:32.986  ================
00:19:32.986         Range in us     Cumulative     Count
00:19:32.986     12.678 -    12.739:    0.0226%  (        2)
00:19:32.986     12.983 -    13.044:    0.1580%  (       12)
00:19:32.986     13.044 -    13.105:    0.4967%  (       30)
00:19:32.986     13.105 -    13.166:    1.1854%  (       61)
00:19:32.986     13.166 -    13.227:    2.5852%  (      124)
00:19:32.986     13.227 -    13.288:    4.5947%  (      178)
00:19:32.986     13.288 -    13.349:    7.0332%  (      216)
00:19:32.986     13.349 -    13.410:    9.3814%  (      208)
00:19:32.986     13.410 -    13.470:   11.5602%  (      193)
00:19:32.986     13.470 -    13.531:   14.0777%  (      223)
00:19:32.986     13.531 -    13.592:   16.5274%  (      217)
00:19:32.986     13.592 -    13.653:   18.8869%  (      209)
00:19:32.986     13.653 -    13.714:   20.8625%  (      175)
00:19:32.986     13.714 -    13.775:   22.7591%  (      168)
00:19:32.986     13.775 -    13.836:   24.3847%  (      144)
00:19:32.986     13.836 -    13.897:   25.9427%  (      138)
00:19:32.986     13.897 -    13.958:   27.5570%  (      143)
00:19:32.986     13.958 -    14.019:   29.7471%  (      194)
00:19:32.986     14.019 -    14.080:   32.0050%  (      200)
00:19:32.986     14.080 -    14.141:   34.7483%  (      243)
00:19:32.986     14.141 -    14.202:   37.6947%  (      261)
00:19:32.986     14.202 -    14.263:   41.7814%  (      362)
00:19:32.986     14.263 -    14.324:   46.4552%  (      414)
00:19:32.986     14.324 -    14.385:   51.8063%  (      474)
00:19:32.986     14.385 -    14.446:   57.1348%  (      472)
00:19:32.986     14.446 -    14.507:   61.7182%  (      406)
00:19:32.986     14.507 -    14.568:   65.8839%  (      369)
00:19:32.986     14.568 -    14.629:   68.9320%  (      270)
00:19:32.986     14.629 -    14.690:   71.6866%  (      244)
00:19:32.986     14.690 -    14.750:   73.6961%  (      178)
00:19:32.986     14.750 -    14.811:   75.0847%  (      123)
00:19:32.986     14.811 -    14.872:   76.3604%  (      113)
00:19:32.986     14.872 -    14.933:   77.3764%  (       90)
00:19:32.986     14.933 -    14.994:   78.0537%  (       60)
00:19:32.986     14.994 -    15.055:   78.3924%  (       30)
00:19:32.986     15.055 -    15.116:   78.7537%  (       32)
00:19:32.986     15.116 -    15.177:   79.0811%  (       29)
00:19:32.986     15.177 -    15.238:   79.2165%  (       12)
00:19:32.986     15.238 -    15.299:   79.4197%  (       18)
00:19:32.986     15.299 -    15.360:   79.5100%  (        8)
00:19:32.986     15.360 -    15.421:   79.6117%  (        9)
00:19:32.986     15.421 -    15.482:   79.6681%  (        5)
00:19:32.986     15.482 -    15.543:   79.6907%  (        2)
00:19:32.986     15.543 -    15.604:   79.7020%  (        1)
00:19:32.986     15.604 -    15.726:   79.7471%  (        4)
00:19:32.986     15.726 -    15.848:   79.7697%  (        2)
00:19:32.986     15.848 -    15.970:   79.8036%  (        3)
00:19:32.986     15.970 -    16.091:   79.8149%  (        1)
00:19:32.986     16.213 -    16.335:   79.8261%  (        1)
00:19:32.986     16.457 -    16.579:   79.8374%  (        1)
00:19:32.986     16.579 -    16.701:   79.8487%  (        1)
00:19:32.986     16.823 -    16.945:   79.8600%  (        1)
00:19:32.986     16.945 -    17.067:   79.8713%  (        1)
00:19:32.986     17.067 -    17.189:   79.8939%  (        2)
00:19:32.986     17.189 -    17.310:   79.9052%  (        1)
00:19:32.986     17.310 -    17.432:   79.9165%  (        1)
00:19:32.986     17.432 -    17.554:   79.9277%  (        1)
00:19:32.986     17.554 -    17.676:   79.9616%  (        3)
00:19:32.986     17.676 -    17.798:   80.0294%  (        6)
00:19:32.986     17.798 -    17.920:   80.1197%  (        8)
00:19:32.986     17.920 -    18.042:   80.1987%  (        7)
00:19:32.986     18.042 -    18.164:   80.4132%  (       19)
00:19:32.986     18.164 -    18.286:   80.7857%  (       33)
00:19:32.986     18.286 -    18.408:   81.5308%  (       66)
00:19:32.986     18.408 -    18.530:   82.4565%  (       82)
00:19:32.986     18.530 -    18.651:   83.2694%  (       72)
00:19:32.986     18.651 -    18.773:   83.9693%  (       62)
00:19:32.986     18.773 -    18.895:   85.1998%  (      109)
00:19:32.986     18.895 -    19.017:   87.7173%  (      223)
00:19:32.986     19.017 -    19.139:   89.9187%  (      195)
00:19:32.986     19.139 -    19.261:   91.7702%  (      164)
00:19:32.986     19.261 -    19.383:   93.1813%  (      125)
00:19:32.986     19.383 -    19.505:   94.2086%  (       91)
00:19:32.986     19.505 -    19.627:   94.8070%  (       53)
00:19:32.986     19.627 -    19.749:   95.2134%  (       36)
00:19:32.986     19.749 -    19.870:   95.6424%  (       38)
00:19:32.986     19.870 -    19.992:   96.0375%  (       35)
00:19:32.986     19.992 -    20.114:   96.1617%  (       11)
00:19:32.986     20.114 -    20.236:   96.3649%  (       18)
00:19:32.986     20.236 -    20.358:   96.4552%  (        8)
00:19:32.986     20.358 -    20.480:   96.5568%  (        9)
00:19:32.986     20.480 -    20.602:   96.6358%  (        7)
00:19:32.986     20.602 -    20.724:   96.6923%  (        5)
00:19:32.986     20.724 -    20.846:   96.7374%  (        4)
00:19:32.986     20.846 -    20.968:   96.8390%  (        9)
00:19:32.986     20.968 -    21.090:   96.9632%  (       11)
00:19:32.986     21.090 -    21.211:   97.0309%  (        6)
00:19:32.986     21.211 -    21.333:   97.0874%  (        5)
00:19:32.986     21.333 -    21.455:   97.2116%  (       11)
00:19:32.986     21.455 -    21.577:   97.2793%  (        6)
00:19:32.986     21.577 -    21.699:   97.3696%  (        8)
00:19:32.986     21.699 -    21.821:   97.4261%  (        5)
00:19:32.986     21.821 -    21.943:   97.4712%  (        4)
00:19:32.986     21.943 -    22.065:   97.5051%  (        3)
00:19:32.986     22.065 -    22.187:   97.5389%  (        3)
00:19:32.986     22.187 -    22.309:   97.5954%  (        5)
00:19:32.986     22.309 -    22.430:   97.6406%  (        4)
00:19:32.986     22.430 -    22.552:   97.6518%  (        1)
00:19:32.986     22.552 -    22.674:   97.7309%  (        7)
00:19:32.986     22.674 -    22.796:   97.7760%  (        4)
00:19:32.986     22.796 -    22.918:   97.8212%  (        4)
00:19:32.986     22.918 -    23.040:   97.8325%  (        1)
00:19:32.986     23.162 -    23.284:   97.8438%  (        1)
00:19:32.986     23.284 -    23.406:   97.9002%  (        5)
00:19:32.986     23.528 -    23.650:   97.9228%  (        2)
00:19:32.986     23.650 -    23.771:   97.9341%  (        1)
00:19:32.986     23.771 -    23.893:   97.9905%  (        5)
00:19:32.986     23.893 -    24.015:   98.0244%  (        3)
00:19:32.986     24.015 -    24.137:   98.0470%  (        2)
00:19:32.986     24.137 -    24.259:   98.0583%  (        1)
00:19:32.987     24.259 -    24.381:   98.0921%  (        3)
00:19:32.987     24.381 -    24.503:   98.1147%  (        2)
00:19:32.987     24.503 -    24.625:   98.1260%  (        1)
00:19:32.987     24.625 -    24.747:   98.2163%  (        8)
00:19:32.987     24.747 -    24.869:   98.2389%  (        2)
00:19:32.987     24.869 -    24.990:   98.3179%  (        7)
00:19:32.987     24.990 -    25.112:   98.3744%  (        5)
00:19:32.987     25.112 -    25.234:   98.4534%  (        7)
00:19:32.987     25.234 -    25.356:   98.5437%  (        8)
00:19:32.987     25.356 -    25.478:   98.6566%  (       10)
00:19:32.987     25.478 -    25.600:   98.7243%  (        6)
00:19:32.987     25.600 -    25.722:   98.7921%  (        6)
00:19:32.987     25.722 -    25.844:   98.8937%  (        9)
00:19:32.987     25.844 -    25.966:   98.9388%  (        4)
00:19:32.987     25.966 -    26.088:   99.0743%  (       12)
00:19:32.987     26.088 -    26.210:   99.1082%  (        3)
00:19:32.987     26.210 -    26.331:   99.1420%  (        3)
00:19:32.987     26.331 -    26.453:   99.1533%  (        1)
00:19:32.987     26.575 -    26.697:   99.1759%  (        2)
00:19:32.987     26.697 -    26.819:   99.2098%  (        3)
00:19:32.987     26.819 -    26.941:   99.2323%  (        2)
00:19:32.987     26.941 -    27.063:   99.2662%  (        3)
00:19:32.987     27.063 -    27.185:   99.2775%  (        1)
00:19:32.987     27.185 -    27.307:   99.2888%  (        1)
00:19:32.987     27.429 -    27.550:   99.3114%  (        2)
00:19:32.987     27.550 -    27.672:   99.3339%  (        2)
00:19:32.987     27.672 -    27.794:   99.3791%  (        4)
00:19:32.987     27.794 -    27.916:   99.3904%  (        1)
00:19:32.987     27.916 -    28.038:   99.4242%  (        3)
00:19:32.987     28.038 -    28.160:   99.4694%  (        4)
00:19:32.987     28.282 -    28.404:   99.5033%  (        3)
00:19:32.987     28.404 -    28.526:   99.5371%  (        3)
00:19:32.987     28.526 -    28.648:   99.5484%  (        1)
00:19:32.987     28.770 -    28.891:   99.5823%  (        3)
00:19:32.987     29.013 -    29.135:   99.5936%  (        1)
00:19:32.987     29.257 -    29.379:   99.6049%  (        1)
00:19:32.987     29.379 -    29.501:   99.6275%  (        2)
00:19:32.987     29.501 -    29.623:   99.6613%  (        3)
00:19:32.987     29.623 -    29.745:   99.6726%  (        1)
00:19:32.987     29.745 -    29.867:   99.7065%  (        3)
00:19:32.987     29.867 -    29.989:   99.7291%  (        2)
00:19:32.987     30.354 -    30.476:   99.7403%  (        1)
00:19:32.987     30.476 -    30.598:   99.7629%  (        2)
00:19:32.987     30.598 -    30.720:   99.7742%  (        1)
00:19:32.987     30.720 -    30.842:   99.7855%  (        1)
00:19:32.987     30.964 -    31.086:   99.7968%  (        1)
00:19:32.987     31.086 -    31.208:   99.8081%  (        1)
00:19:32.987     31.451 -    31.695:   99.8307%  (        2)
00:19:32.987     31.695 -    31.939:   99.8532%  (        2)
00:19:32.987     32.183 -    32.427:   99.8645%  (        1)
00:19:32.987     33.402 -    33.646:   99.8758%  (        1)
00:19:32.987     33.646 -    33.890:   99.8871%  (        1)
00:19:32.987     33.890 -    34.133:   99.8984%  (        1)
00:19:32.987     38.034 -    38.278:   99.9097%  (        1)
00:19:32.987     38.522 -    38.766:   99.9210%  (        1)
00:19:32.987     38.766 -    39.010:   99.9323%  (        1)
00:19:32.987     50.712 -    50.956:   99.9436%  (        1)
00:19:32.987     72.655 -    73.143:   99.9548%  (        1)
00:19:32.987     76.069 -    76.556:   99.9661%  (        1)
00:19:32.987     78.507 -    78.994:   99.9774%  (        1)
00:19:32.987     94.110 -    94.598:   99.9887%  (        1)
00:19:32.987    105.326 -   105.813:  100.0000%  (        1)
00:19:32.987  
00:19:32.987  Complete histogram
00:19:32.987  ==================
00:19:32.987         Range in us     Cumulative     Count
00:19:32.987      8.533 -     8.594:    0.0113%  (        1)
00:19:32.987      8.594 -     8.655:    0.2709%  (       23)
00:19:32.987      8.655 -     8.716:    1.5015%  (      109)
00:19:32.987      8.716 -     8.777:    3.1723%  (      148)
00:19:32.987      8.777 -     8.838:    4.6850%  (      134)
00:19:32.987      8.838 -     8.899:    5.8591%  (      104)
00:19:32.987      8.899 -     8.960:    7.0558%  (      106)
00:19:32.987      8.960 -     9.021:    8.2073%  (      102)
00:19:32.987      9.021 -     9.082:    9.2685%  (       94)
00:19:32.987      9.082 -     9.143:   10.5893%  (      117)
00:19:32.987      9.143 -     9.204:   13.0503%  (      218)
00:19:32.987      9.204 -     9.265:   18.3450%  (      469)
00:19:32.987      9.265 -     9.326:   25.4346%  (      628)
00:19:32.987      9.326 -     9.387:   33.4951%  (      714)
00:19:32.987      9.387 -     9.448:   40.4606%  (      617)
00:19:32.987      9.448 -     9.509:   46.4552%  (      531)
00:19:32.987      9.509 -     9.570:   51.4676%  (      444)
00:19:32.987      9.570 -     9.630:   55.6672%  (      372)
00:19:32.987      9.630 -     9.691:   59.5733%  (      346)
00:19:32.987      9.691 -     9.752:   62.3617%  (      247)
00:19:32.987      9.752 -     9.813:   65.3082%  (      261)
00:19:32.987      9.813 -     9.874:   67.9386%  (      233)
00:19:32.987      9.874 -     9.935:   69.8578%  (      170)
00:19:32.987      9.935 -     9.996:   72.0930%  (      198)
00:19:32.987      9.996 -    10.057:   74.0686%  (      175)
00:19:32.987     10.057 -    10.118:   75.5927%  (      135)
00:19:32.987     10.118 -    10.179:   76.7555%  (      103)
00:19:32.987     10.179 -    10.240:   77.6473%  (       79)
00:19:32.987     10.240 -    10.301:   78.3811%  (       65)
00:19:32.987     10.301 -    10.362:   79.0585%  (       60)
00:19:32.987     10.362 -    10.423:   79.5439%  (       43)
00:19:32.987     10.423 -    10.484:   79.8261%  (       25)
00:19:32.987     10.484 -    10.545:   80.0294%  (       18)
00:19:32.987     10.545 -    10.606:   80.2326%  (       18)
00:19:32.987     10.606 -    10.667:   80.4019%  (       15)
00:19:32.987     10.667 -    10.728:   80.5035%  (        9)
00:19:32.987     10.728 -    10.789:   80.6164%  (       10)
00:19:32.987     10.789 -    10.850:   80.7293%  (       10)
00:19:32.987     10.850 -    10.910:   80.7970%  (        6)
00:19:32.987     10.910 -    10.971:   80.8535%  (        5)
00:19:32.987     10.971 -    11.032:   80.9099%  (        5)
00:19:32.987     11.032 -    11.093:   80.9551%  (        4)
00:19:32.987     11.093 -    11.154:   80.9889%  (        3)
00:19:32.987     11.154 -    11.215:   81.0228%  (        3)
00:19:32.987     11.215 -    11.276:   81.0341%  (        1)
00:19:32.987     11.337 -    11.398:   81.0454%  (        1)
00:19:32.987     11.520 -    11.581:   81.0793%  (        3)
00:19:32.987     12.190 -    12.251:   81.0905%  (        1)
00:19:32.987     12.373 -    12.434:   81.1018%  (        1)
00:19:32.987     12.434 -    12.495:   81.2486%  (       13)
00:19:32.987     12.495 -    12.556:   81.7566%  (       45)
00:19:32.987     12.556 -    12.617:   83.0097%  (      111)
00:19:32.987     12.617 -    12.678:   84.2741%  (      112)
00:19:32.987     12.678 -    12.739:   85.8546%  (      140)
00:19:32.987     12.739 -    12.800:   86.8142%  (       85)
00:19:32.987     12.800 -    12.861:   87.7738%  (       85)
00:19:32.987     12.861 -    12.922:   88.6205%  (       75)
00:19:32.987     12.922 -    12.983:   89.3994%  (       69)
00:19:32.987     12.983 -    13.044:   90.3138%  (       81)
00:19:32.987     13.044 -    13.105:   91.1831%  (       77)
00:19:32.987     13.105 -    13.166:   92.0750%  (       79)
00:19:32.987     13.166 -    13.227:   92.9104%  (       74)
00:19:32.987     13.227 -    13.288:   93.6667%  (       67)
00:19:32.987     13.288 -    13.349:   94.2651%  (       53)
00:19:32.987     13.349 -    13.410:   94.7505%  (       43)
00:19:32.987     13.410 -    13.470:   95.0440%  (       26)
00:19:32.987     13.470 -    13.531:   95.4279%  (       34)
00:19:32.987     13.531 -    13.592:   95.6649%  (       21)
00:19:32.987     13.592 -    13.653:   95.7891%  (       11)
00:19:32.987     13.653 -    13.714:   95.8681%  (        7)
00:19:32.987     13.714 -    13.775:   96.0036%  (       12)
00:19:32.987     13.775 -    13.836:   96.0939%  (        8)
00:19:32.987     13.836 -    13.897:   96.1391%  (        4)
00:19:32.987     13.897 -    13.958:   96.2068%  (        6)
00:19:32.987     13.958 -    14.019:   96.2520%  (        4)
00:19:32.987     14.019 -    14.080:   96.2858%  (        3)
00:19:32.987     14.080 -    14.141:   96.4213%  (       12)
00:19:32.987     14.141 -    14.202:   96.5116%  (        8)
00:19:32.987     14.202 -    14.263:   96.5794%  (        6)
00:19:32.987     14.263 -    14.324:   96.6019%  (        2)
00:19:32.987     14.324 -    14.385:   96.6584%  (        5)
00:19:32.987     14.385 -    14.446:   96.6697%  (        1)
00:19:32.987     14.446 -    14.507:   96.7035%  (        3)
00:19:32.987     14.507 -    14.568:   96.7148%  (        1)
00:19:32.987     14.568 -    14.629:   96.7374%  (        2)
00:19:32.987     14.629 -    14.690:   96.7713%  (        3)
00:19:32.987     14.750 -    14.811:   96.7826%  (        1)
00:19:32.987     14.811 -    14.872:   96.7939%  (        1)
00:19:32.987     14.872 -    14.933:   96.8051%  (        1)
00:19:32.987     14.933 -    14.994:   96.8164%  (        1)
00:19:32.987     14.994 -    15.055:   96.8390%  (        2)
00:19:32.987     15.055 -    15.116:   96.8729%  (        3)
00:19:32.987     15.116 -    15.177:   96.8955%  (        2)
00:19:32.987     15.177 -    15.238:   96.9180%  (        2)
00:19:32.987     15.238 -    15.299:   96.9293%  (        1)
00:19:32.987     15.299 -    15.360:   96.9519%  (        2)
00:19:32.987     15.360 -    15.421:   96.9632%  (        1)
00:19:32.987     15.421 -    15.482:   96.9971%  (        3)
00:19:32.987     15.482 -    15.543:   97.0196%  (        2)
00:19:32.987     15.543 -    15.604:   97.0422%  (        2)
00:19:32.987     15.604 -    15.726:   97.0761%  (        3)
00:19:32.987     15.726 -    15.848:   97.1100%  (        3)
00:19:32.987     15.848 -    15.970:   97.2116%  (        9)
00:19:32.987     15.970 -    16.091:   97.2906%  (        7)
00:19:32.987     16.091 -    16.213:   97.4035%  (       10)
00:19:32.987     16.213 -    16.335:   97.4712%  (        6)
00:19:32.987     16.335 -    16.457:   97.5728%  (        9)
00:19:32.987     16.457 -    16.579:   97.6631%  (        8)
00:19:32.987     16.579 -    16.701:   97.8325%  (       15)
00:19:32.987     16.701 -    16.823:   97.9341%  (        9)
00:19:32.987     16.823 -    16.945:   97.9792%  (        4)
00:19:32.987     16.945 -    17.067:   98.0018%  (        2)
00:19:32.987     17.067 -    17.189:   98.0695%  (        6)
00:19:32.987     17.189 -    17.310:   98.1373%  (        6)
00:19:32.987     17.310 -    17.432:   98.2050%  (        6)
00:19:32.987     17.432 -    17.554:   98.2615%  (        5)
00:19:32.987     17.554 -    17.676:   98.2953%  (        3)
00:19:32.987     17.676 -    17.798:   98.3405%  (        4)
00:19:32.987     17.798 -    17.920:   98.3631%  (        2)
00:19:32.987     17.920 -    18.042:   98.3969%  (        3)
00:19:32.987     18.042 -    18.164:   98.4195%  (        2)
00:19:32.987     18.164 -    18.286:   98.4308%  (        1)
00:19:32.987     18.286 -    18.408:   98.4534%  (        2)
00:19:32.987     18.651 -    18.773:   98.4985%  (        4)
00:19:32.987     19.017 -    19.139:   98.5098%  (        1)
00:19:32.987     19.139 -    19.261:   98.5211%  (        1)
00:19:32.987     19.261 -    19.383:   98.5324%  (        1)
00:19:32.987     19.505 -    19.627:   98.5437%  (        1)
00:19:32.988     19.627 -    19.749:   98.5663%  (        2)
00:19:32.988     19.749 -    19.870:   98.5776%  (        1)
00:19:32.988     20.114 -    20.236:   98.5888%  (        1)
00:19:32.988     20.236 -    20.358:   98.6114%  (        2)
00:19:32.988     20.358 -    20.480:   98.6340%  (        2)
00:19:32.988     20.480 -    20.602:   98.7356%  (        9)
00:19:32.988     20.602 -    20.724:   98.8259%  (        8)
00:19:32.988     20.724 -    20.846:   98.9049%  (        7)
00:19:32.988     20.846 -    20.968:   98.9614%  (        5)
00:19:32.988     20.968 -    21.090:   98.9840%  (        2)
00:19:32.988     21.090 -    21.211:   99.0856%  (        9)
00:19:32.988     21.211 -    21.333:   99.1420%  (        5)
00:19:32.988     21.333 -    21.455:   99.1985%  (        5)
00:19:32.988     21.455 -    21.577:   99.2662%  (        6)
00:19:32.988     21.577 -    21.699:   99.3114%  (        4)
00:19:32.988     21.699 -    21.821:   99.3565%  (        4)
00:19:32.988     21.821 -    21.943:   99.4017%  (        4)
00:19:32.988     21.943 -    22.065:   99.4130%  (        1)
00:19:32.988     22.065 -    22.187:   99.4581%  (        4)
00:19:32.988     22.187 -    22.309:   99.5146%  (        5)
00:19:32.988     22.309 -    22.430:   99.5371%  (        2)
00:19:32.988     22.430 -    22.552:   99.5710%  (        3)
00:19:32.988     22.552 -    22.674:   99.6049%  (        3)
00:19:32.988     22.674 -    22.796:   99.6162%  (        1)
00:19:32.988     22.796 -    22.918:   99.6275%  (        1)
00:19:32.988     22.918 -    23.040:   99.6613%  (        3)
00:19:32.988     23.040 -    23.162:   99.6726%  (        1)
00:19:32.988     23.528 -    23.650:   99.6839%  (        1)
00:19:32.988     23.650 -    23.771:   99.6952%  (        1)
00:19:32.988     24.015 -    24.137:   99.7065%  (        1)
00:19:32.988     24.137 -    24.259:   99.7178%  (        1)
00:19:32.988     24.259 -    24.381:   99.7403%  (        2)
00:19:32.988     24.381 -    24.503:   99.7516%  (        1)
00:19:32.988     24.503 -    24.625:   99.7742%  (        2)
00:19:32.988     24.990 -    25.112:   99.7855%  (        1)
00:19:32.988     25.600 -    25.722:   99.7968%  (        1)
00:19:32.988     25.722 -    25.844:   99.8081%  (        1)
00:19:32.988     26.088 -    26.210:   99.8194%  (        1)
00:19:32.988     26.453 -    26.575:   99.8307%  (        1)
00:19:32.988     26.575 -    26.697:   99.8532%  (        2)
00:19:32.988     26.697 -    26.819:   99.8758%  (        2)
00:19:32.988     26.819 -    26.941:   99.8871%  (        1)
00:19:32.988     27.307 -    27.429:   99.8984%  (        1)
00:19:32.988     27.429 -    27.550:   99.9097%  (        1)
00:19:32.988     28.770 -    28.891:   99.9210%  (        1)
00:19:32.988     30.964 -    31.086:   99.9323%  (        1)
00:19:32.988     31.208 -    31.451:   99.9436%  (        1)
00:19:32.988     33.890 -    34.133:   99.9548%  (        1)
00:19:32.988     39.741 -    39.985:   99.9661%  (        1)
00:19:32.988     43.886 -    44.130:   99.9774%  (        1)
00:19:32.988     50.469 -    50.712:   99.9887%  (        1)
00:19:32.988    160.914 -   161.890:  100.0000%  (        1)
00:19:32.988  
00:19:32.988  ************************************
00:19:32.988  END TEST nvme_overhead
00:19:32.988  ************************************
00:19:32.988  
00:19:32.988  real	0m1.399s
00:19:32.988  user	0m1.151s
00:19:32.988  sys	0m0.201s
00:19:32.988   13:56:15 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:32.988   13:56:15 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
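Each row of the two histograms above reads "bucket upper range (us): cumulative percentage (count in bucket)". A minimal post-processing sketch, assuming this output was saved to a file (overhead.log is a hypothetical name, not something the test writes):

  # Sum the per-bucket counts across both histograms; each I/O contributes
  # one submit sample and one complete sample, so the sum is 2x the I/O count.
  awk -F'[()]' '/%.*\(/ { sum += $2 } END { print sum }' overhead.log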
00:19:32.988   13:56:15 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:19:32.988   13:56:15 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:19:32.988   13:56:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:32.988   13:56:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:32.988  ************************************
00:19:32.988  START TEST nvme_arbitration
00:19:32.988  ************************************
00:19:32.988   13:56:15 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:19:36.301  Initializing NVMe Controllers
00:19:36.301  Attached to 0000:00:10.0
00:19:36.301  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:19:36.301  Associating QEMU NVMe Ctrl       (12340               ) with lcore 1
00:19:36.301  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:19:36.301  Associating QEMU NVMe Ctrl       (12340               ) with lcore 3
00:19:36.301  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:19:36.301  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:19:36.301  Initialization complete. Launching workers.
00:19:36.301  Starting thread on core 1 with urgent priority queue
00:19:36.301  Starting thread on core 2 with urgent priority queue
00:19:36.301  Starting thread on core 3 with urgent priority queue
00:19:36.301  Starting thread on core 0 with urgent priority queue
00:19:36.301  QEMU NVMe Ctrl       (12340               ) core 0:   981.33 IO/s   101.90 secs/100000 ios
00:19:36.301  QEMU NVMe Ctrl       (12340               ) core 1:  1066.67 IO/s    93.75 secs/100000 ios
00:19:36.301  QEMU NVMe Ctrl       (12340               ) core 2:   490.67 IO/s   203.80 secs/100000 ios
00:19:36.301  QEMU NVMe Ctrl       (12340               ) core 3:   533.33 IO/s   187.50 secs/100000 ios
00:19:36.301  ========================================================
00:19:36.301  
00:19:36.561  ************************************
00:19:36.561  END TEST nvme_arbitration
00:19:36.561  ************************************
00:19:36.561  
00:19:36.561  real	0m3.487s
00:19:36.561  user	0m9.389s
00:19:36.561  sys	0m0.218s
00:19:36.561   13:56:19 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:36.561   13:56:19 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
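The secs/100000 ios column in the arbitration summary above is simply 100000 divided by the IO/s column. A quick check against the core 0 row:

  # 100000 I/Os at 981.33 IO/s -> 101.90 s, matching the core 0 line above
  echo "scale=2; 100000 / 981.33" | bc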
00:19:36.561   13:56:19 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:19:36.561   13:56:19 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:19:36.561   13:56:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:36.561   13:56:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:36.561  ************************************
00:19:36.561  START TEST nvme_single_aen
00:19:36.561  ************************************
00:19:36.561   13:56:19 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:19:36.820  Asynchronous Event Request test
00:19:36.820  Attached to 0000:00:10.0
00:19:36.820  Reset controller to setup AER completions for this process
00:19:36.820  Registering asynchronous event callbacks...
00:19:36.820  Getting orig temperature thresholds of all controllers
00:19:36.820  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:36.820  Setting all controllers temperature threshold low to trigger AER
00:19:36.820  Waiting for all controllers temperature threshold to be set lower
00:19:36.820  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:36.820  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:19:36.820  Waiting for all controllers to trigger AER and reset threshold
00:19:36.820  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:19:36.820  Cleaning up...
00:19:36.820  ************************************
00:19:36.820  END TEST nvme_single_aen
00:19:36.820  ************************************
00:19:36.820  
00:19:36.820  real	0m0.325s
00:19:36.820  user	0m0.113s
00:19:36.820  sys	0m0.172s
00:19:36.820   13:56:19 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:36.820   13:56:19 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
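The temperature lines above report Kelvin with Celsius in parentheses; the conversion is a flat 273-degree offset (whole degrees, as the test prints):

  # Kelvin-to-Celsius check for the threshold (343 K) and current (323 K) lines
  echo $(( 343 - 273 ))   # -> 70
  echo $(( 323 - 273 ))   # -> 50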
00:19:36.820   13:56:19 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:19:36.820   13:56:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:36.820   13:56:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:36.820   13:56:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:36.820  ************************************
00:19:36.820  START TEST nvme_doorbell_aers
00:19:36.820  ************************************
00:19:36.820   13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:19:36.820   13:56:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:19:36.820   13:56:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:19:36.820   13:56:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:19:36.820    13:56:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:19:36.820    13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:19:36.820    13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:19:36.820    13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:19:36.821     13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:19:36.821     13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:19:36.821    13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:19:36.821    13:56:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:19:36.821   13:56:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:19:36.821   13:56:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:19:37.389  [2024-12-11 13:56:19.972504] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82314) is not found. Dropping the request.
00:19:47.371  Executing: test_write_invalid_db
00:19:47.371  Waiting for AER completion...
00:19:47.371  Failure: test_write_invalid_db
00:19:47.371  
00:19:47.371  Executing: test_invalid_db_write_overflow_sq
00:19:47.371  Waiting for AER completion...
00:19:47.371  Failure: test_invalid_db_write_overflow_sq
00:19:47.371  
00:19:47.371  Executing: test_invalid_db_write_overflow_cq
00:19:47.371  Waiting for AER completion...
00:19:47.371  Failure: test_invalid_db_write_overflow_cq
00:19:47.371  
00:19:47.371  ************************************
00:19:47.371  END TEST nvme_doorbell_aers
00:19:47.371  ************************************
00:19:47.371  
00:19:47.371  real	0m10.111s
00:19:47.371  user	0m7.445s
00:19:47.371  sys	0m2.613s
00:19:47.371   13:56:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:47.371   13:56:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
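The xtrace above shows how the doorbell test discovers its target devices: gen_nvme.sh emits a JSON config and jq pulls out each controller's PCI address. A readable recap of the traced commands (a sketch; paths and flags are taken verbatim from the trace):

  # discover NVMe bdfs from the generated config, then run doorbell_aers per device
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
      /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
      -r "trtype:PCIe traddr:$bdf"
  done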
00:19:47.371    13:56:29 nvme -- nvme/nvme.sh@97 -- # uname
00:19:47.371   13:56:29 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:19:47.371   13:56:29 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:19:47.371   13:56:29 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:19:47.371   13:56:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:47.371   13:56:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:47.371  ************************************
00:19:47.371  START TEST nvme_multi_aen
00:19:47.371  ************************************
00:19:47.371   13:56:29 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:19:47.372  [2024-12-11 13:56:30.038808] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82314) is not found. Dropping the request.
00:19:47.372  [2024-12-11 13:56:30.038948] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82314) is not found. Dropping the request.
00:19:47.372  [2024-12-11 13:56:30.038987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82314) is not found. Dropping the request.
00:19:47.372  Child process pid: 82486
00:19:47.631  [Child] Asynchronous Event Request test
00:19:47.631  [Child] Attached to 0000:00:10.0
00:19:47.631  [Child] Registering asynchronous event callbacks...
00:19:47.631  [Child] Getting orig temperature thresholds of all controllers
00:19:47.631  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:47.631  [Child] Waiting for all controllers to trigger AER and reset threshold
00:19:47.631  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:47.631  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:19:47.631  [Child] Cleaning up...
00:19:47.891  Asynchronous Event Request test
00:19:47.891  Attached to 0000:00:10.0
00:19:47.891  Reset controller to setup AER completions for this process
00:19:47.891  Registering asynchronous event callbacks...
00:19:47.891  Getting orig temperature thresholds of all controllers
00:19:47.891  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:19:47.891  Setting all controllers temperature threshold low to trigger AER
00:19:47.891  Waiting for all controllers temperature threshold to be set lower
00:19:47.891  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:19:47.891  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:19:47.891  Waiting for all controllers to trigger AER and reset threshold
00:19:47.891  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:19:47.891  Cleaning up...
00:19:47.891  
00:19:47.891  real	0m0.748s
00:19:47.891  user	0m0.286s
00:19:47.891  sys	0m0.360s
00:19:47.891   13:56:30 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:47.891   13:56:30 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:19:47.891  ************************************
00:19:47.891  END TEST nvme_multi_aen
00:19:47.891  ************************************
00:19:47.891   13:56:30 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:19:47.891   13:56:30 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:19:47.891   13:56:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:47.891   13:56:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:47.891  ************************************
00:19:47.891  START TEST nvme_startup
00:19:47.891  ************************************
00:19:47.891   13:56:30 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:19:48.150  Initializing NVMe Controllers
00:19:48.150  Attached to 0000:00:10.0
00:19:48.150  Initialization complete.
00:19:48.150  Time used: 278641.562 (us).
00:19:48.150  
00:19:48.150  real	0m0.399s
00:19:48.150  user	0m0.132s
00:19:48.150  sys	0m0.223s
00:19:48.150   13:56:30 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:48.150  ************************************
00:19:48.150  END TEST nvme_startup
00:19:48.150  ************************************
00:19:48.150   13:56:30 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
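The "Time used" figure above is reported in microseconds, i.e. roughly 0.279 s of the 0m0.399s wall time; the remainder is per-process setup and teardown:

  # microseconds -> seconds for the startup figure above
  awk 'BEGIN { printf "%.3f s\n", 278641.562 / 1e6 }'   # -> 0.279 s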
00:19:48.409   13:56:30 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:19:48.409   13:56:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:48.409   13:56:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:48.409   13:56:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:19:48.409  ************************************
00:19:48.409  START TEST nvme_multi_secondary
00:19:48.409  ************************************
00:19:48.409   13:56:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:19:48.409   13:56:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=82548
00:19:48.409   13:56:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:19:48.409   13:56:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=82549
00:19:48.409   13:56:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:19:48.409   13:56:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:19:51.731  Initializing NVMe Controllers
00:19:51.731  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:51.731  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:19:51.731  Initialization complete. Launching workers.
00:19:51.731  ========================================================
00:19:51.731                                                                             Latency(us)
00:19:51.731  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:51.731  PCIE (0000:00:10.0) NSID 1 from core  1:   33483.49     130.79     477.49     169.86    2036.95
00:19:51.731  ========================================================
00:19:51.731  Total                                  :   33483.49     130.79     477.49     169.86    2036.95
00:19:51.731  
00:19:51.731  Initializing NVMe Controllers
00:19:51.731  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:51.731  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:19:51.731  Initialization complete. Launching workers.
00:19:51.731  ========================================================
00:19:51.731                                                                             Latency(us)
00:19:51.731  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:51.731  PCIE (0000:00:10.0) NSID 1 from core  2:   15308.84      59.80    1044.75     170.73    8793.33
00:19:51.731  ========================================================
00:19:51.731  Total                                  :   15308.84      59.80    1044.75     170.73    8793.33
00:19:51.731  
00:19:51.990   13:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 82548
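In the perf tables above, MiB/s follows directly from IOPS at the 4096-byte I/O size set by -o: MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. Checking the lcore 1 row:

  # 33483.49 IOPS of 4 KiB reads -> 130.79 MiB/s, matching the table above
  awk 'BEGIN { printf "%.2f\n", 33483.49 / 256 }'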
00:19:53.901  Initializing NVMe Controllers
00:19:53.901  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:53.901  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:53.901  Initialization complete. Launching workers.
00:19:53.901  ========================================================
00:19:53.901                                                                             Latency(us)
00:19:53.901  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:53.901  PCIE (0000:00:10.0) NSID 1 from core  0:   41279.80     161.25     387.28     160.75    2093.91
00:19:53.901  ========================================================
00:19:53.901  Total                                  :   41279.80     161.25     387.28     160.75    2093.91
00:19:53.901  
00:19:54.160   13:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 82549
00:19:54.160   13:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=82613
00:19:54.160   13:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:19:54.160   13:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=82614
00:19:54.160   13:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:19:54.160   13:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:19:57.478  Initializing NVMe Controllers
00:19:57.478  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:57.478  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:19:57.478  Initialization complete. Launching workers.
00:19:57.478  ========================================================
00:19:57.478                                                                             Latency(us)
00:19:57.478  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:57.478  PCIE (0000:00:10.0) NSID 1 from core  1:   35758.90     139.68     447.10     168.03    1495.40
00:19:57.478  ========================================================
00:19:57.478  Total                                  :   35758.90     139.68     447.10     168.03    1495.40
00:19:57.478  
00:19:57.737  Initializing NVMe Controllers
00:19:57.737  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:57.737  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:57.737  Initialization complete. Launching workers.
00:19:57.737  ========================================================
00:19:57.737                                                                             Latency(us)
00:19:57.737  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:57.737  PCIE (0000:00:10.0) NSID 1 from core  0:   33178.67     129.60     481.90     157.10    2214.65
00:19:57.737  ========================================================
00:19:57.737  Total                                  :   33178.67     129.60     481.90     157.10    2214.65
00:19:57.737  
00:19:59.641  Initializing NVMe Controllers
00:19:59.641  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:59.641  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:19:59.641  Initialization complete. Launching workers.
00:19:59.641  ========================================================
00:19:59.641                                                                             Latency(us)
00:19:59.641  Device Information                     :       IOPS      MiB/s    Average        min        max
00:19:59.641  PCIE (0000:00:10.0) NSID 1 from core  2:   17916.80      69.99     892.36     177.01    8687.66
00:19:59.641  ========================================================
00:19:59.641  Total                                  :   17916.80      69.99     892.36     177.01    8687.66
00:19:59.641  
00:19:59.641   13:56:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 82613
00:19:59.641  ************************************
00:19:59.641  END TEST nvme_multi_secondary
00:19:59.641  ************************************
00:19:59.641   13:56:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 82614
00:19:59.641  
00:19:59.641  real	0m11.401s
00:19:59.641  user	0m18.783s
00:19:59.641  sys	0m1.338s
00:19:59.641   13:56:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:59.641   13:56:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
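The -c arguments passed to the three concurrent spdk_nvme_perf instances above are CPU core bitmasks: bit n selects core n, so 0x1, 0x2 and 0x4 pin them to cores 0, 1 and 2 respectively:

  # core masks are bitmaps: bit n selects core n
  printf '0x%x\n' $(( 1 << 0 ))   # 0x1 -> core 0
  printf '0x%x\n' $(( 1 << 1 ))   # 0x2 -> core 1
  printf '0x%x\n' $(( 1 << 2 ))   # 0x4 -> core 2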
00:19:59.901   13:56:42 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:19:59.901   13:56:42 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/81923 ]]
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1094 -- # kill 81923
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1095 -- # wait 81923
00:19:59.901  [2024-12-11 13:56:42.440012] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82485) is not found. Dropping the request.
00:19:59.901  [2024-12-11 13:56:42.440124] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82485) is not found. Dropping the request.
00:19:59.901  [2024-12-11 13:56:42.440165] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82485) is not found. Dropping the request.
00:19:59.901  [2024-12-11 13:56:42.440197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82485) is not found. Dropping the request.
00:19:59.901  [2024-12-11 13:56:42.663761] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited.
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:19:59.901   13:56:42 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:59.901   13:56:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:00.162  ************************************
00:20:00.162  START TEST bdev_nvme_reset_stuck_adm_cmd
00:20:00.162  ************************************
00:20:00.162   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:20:00.162  * Looking for test storage...
00:20:00.162  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:00.162     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
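The scripts/common.sh trace above splits both version strings on '.', '-' and ':' and compares them field by field, numerically. A condensed sketch of the same idea (ver_lt is a hypothetical name, not the script's):

  # field-wise numeric version compare, mirroring the cmp_versions trace above
  ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less in this field
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # all fields equal
  }
  ver_lt 1.15 2 && echo '1.15 < 2'   # same verdict as the trace above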
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:00.162  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:00.162  		--rc genhtml_branch_coverage=1
00:20:00.162  		--rc genhtml_function_coverage=1
00:20:00.162  		--rc genhtml_legend=1
00:20:00.162  		--rc geninfo_all_blocks=1
00:20:00.162  		--rc geninfo_unexecuted_blocks=1
00:20:00.162  		
00:20:00.162  		'
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:00.162  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:00.162  		--rc genhtml_branch_coverage=1
00:20:00.162  		--rc genhtml_function_coverage=1
00:20:00.162  		--rc genhtml_legend=1
00:20:00.162  		--rc geninfo_all_blocks=1
00:20:00.162  		--rc geninfo_unexecuted_blocks=1
00:20:00.162  		
00:20:00.162  		'
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:00.162  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:00.162  		--rc genhtml_branch_coverage=1
00:20:00.162  		--rc genhtml_function_coverage=1
00:20:00.162  		--rc genhtml_legend=1
00:20:00.162  		--rc geninfo_all_blocks=1
00:20:00.162  		--rc geninfo_unexecuted_blocks=1
00:20:00.162  		
00:20:00.162  		'
00:20:00.162    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:00.162  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:00.162  		--rc genhtml_branch_coverage=1
00:20:00.162  		--rc genhtml_function_coverage=1
00:20:00.162  		--rc genhtml_legend=1
00:20:00.162  		--rc geninfo_all_blocks=1
00:20:00.162  		--rc geninfo_unexecuted_blocks=1
00:20:00.162  		
00:20:00.162  		'
00:20:00.162   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:20:00.162   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:20:00.163   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:20:00.163   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:20:00.163   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:20:00.163    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:20:00.163    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:20:00.163    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:20:00.163    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:20:00.163     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:20:00.163     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:20:00.163     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:20:00.163     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:20:00.163      13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:00.163      13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:20:00.422     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:20:00.422     13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:20:00.422    13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=82768
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 82768
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 82768 ']'
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:00.422  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:00.422   13:56:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:20:00.422  [2024-12-11 13:56:43.073584] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:00.422  [2024-12-11 13:56:43.073785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82768 ]
00:20:00.682  [2024-12-11 13:56:43.304873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:00.940  [2024-12-11 13:56:43.509145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:00.940  [2024-12-11 13:56:43.509292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:20:00.940  [2024-12-11 13:56:43.509437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:00.940  [2024-12-11 13:56:43.509461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:20:01.878   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:01.878   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:20:01.878   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:20:01.878   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.878   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:20:02.137  nvme0n1
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.137    13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_uzuWm.txt
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:20:02.137  true
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.137    13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733925404
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=82798
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:20:02.137   13:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:20:04.040   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:20:04.040   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.040   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:20:04.040  [2024-12-11 13:56:46.753086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:20:04.040  [2024-12-11 13:56:46.753655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:04.040  [2024-12-11 13:56:46.753714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:20:04.040  [2024-12-11 13:56:46.753736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:04.040  [2024-12-11 13:56:46.755818] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:20:04.040   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.040  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 82798
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 82798
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 82798
00:20:04.041    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:20:04.041    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_uzuWm.txt
00:20:04.041   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:20:04.041    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:20:04.041    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:20:04.041    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:20:04.041     13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:20:04.041     13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:20:04.041      13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:20:04.300     13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:20:04.300     13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:20:04.300      13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
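The two base64_decode_bits calls above pull the Status Code (SC) and Status Code Type (SCT) back out of the completion saved by bdev_nvme_send_cmd. A minimal sketch of that extraction, assuming (consistently with status=2 in the trace) that the 16-bit status word is assembled little-endian from the last two CPL bytes, where bit 0 is the phase tag, bits 1-8 are SC, and bits 9-11 are SCT:

    # Decode the 16-byte CPL and extract SC/SCT with the shift/mask pairs
    # used above: (1, 255) for SC and (9, 3) for SCT.
    cpl_b64=AAAAAAAAAAAAAAAAAAACAA==
    bytes=($(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(((bytes[15] << 8) | bytes[14]))    # 0x0002 for this CPL
    printf 'sc=0x%x sct=0x%x\n' $(((status >> 1) & 255)) $(((status >> 9) & 3))
    # -> sc=0x1 sct=0x0, matching the injected --sct 0 --sc 1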
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_uzuWm.txt
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 82768
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 82768 ']'
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 82768
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:04.300    13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82768
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:04.300  killing process with pid 82768
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82768'
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 82768
00:20:04.300   13:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 82768
00:20:06.834   13:56:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:20:06.834   13:56:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:20:06.834  
00:20:06.834  real	0m6.894s
00:20:06.834  user	0m24.076s
00:20:06.834  sys	0m0.942s
00:20:06.834   13:56:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:06.834   13:56:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:20:06.834  ************************************
00:20:06.834  END TEST bdev_nvme_reset_stuck_adm_cmd
00:20:06.834  ************************************
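Condensed, the test that just finished exercises controller reset while an admin command is deliberately stuck: arm a one-shot error injection that holds the next GET FEATURES admin command, submit that command in the background, reset the controller, then check that the reset finished inside the timeout and that the held command completed with the injected status. A rough standalone sketch built from the same RPCs traced above (assuming $SPDK_DIR points at the checkout used in this run; $cmd_b64 stands for the 64-byte base64 admin command whose full value appears in the trace):

    # Sketch of the bdev_nvme_reset_stuck_adm_cmd flow, per the trace above.
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0xF & spdk_pid=$!
    rpc="$SPDK_DIR/scripts/rpc.py"
    tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Hold the next GET FEATURES (opc 10) admin command for up to 15 s and
    # complete it with the injected sct=0 sc=1 instead of submitting it.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" > "$tmp_file" &
    send_pid=$!
    sleep 2                                    # let the command get stuck
    "$rpc" bdev_nvme_reset_controller nvme0    # must complete it manually
    wait "$send_pid"                           # returns once the CPL lands
    jq -r .cpl "$tmp_file"                     # injected status, decoded later
    kill "$spdk_pid"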
00:20:07.092   13:56:49 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:20:07.092   13:56:49 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:20:07.092   13:56:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:07.092   13:56:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:07.092   13:56:49 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:07.092  ************************************
00:20:07.092  START TEST nvme_fio
00:20:07.092  ************************************
00:20:07.092   13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:20:07.092    13:56:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:20:07.092    13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:20:07.092    13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:20:07.092    13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:20:07.092     13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:07.092     13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:20:07.092    13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:20:07.092    13:56:49 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0')
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:20:07.092   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:20:07.351   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:20:07.351   13:56:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:20:07.610   13:56:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:20:07.610   13:56:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:07.610    13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:20:07.610    13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:20:07.610    13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:20:07.610   13:56:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:20:07.870  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:20:07.870  fio-3.35
00:20:07.870  Starting 1 thread
00:20:11.155  
00:20:11.155  test: (groupid=0, jobs=1): err= 0: pid=82945: Wed Dec 11 13:56:53 2024
00:20:11.155    read: IOPS=17.9k, BW=70.0MiB/s (73.4MB/s)(140MiB/2001msec)
00:20:11.155      slat (usec): min=4, max=126, avg= 5.70, stdev= 1.92
00:20:11.155      clat (usec): min=252, max=9996, avg=3548.24, stdev=464.14
00:20:11.155       lat (usec): min=257, max=10122, avg=3553.94, stdev=464.79
00:20:11.155      clat percentiles (usec):
00:20:11.155       |  1.00th=[ 3064],  5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261],
00:20:11.155       | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3490],
00:20:11.155       | 70.00th=[ 3556], 80.00th=[ 3720], 90.00th=[ 4228], 95.00th=[ 4359],
00:20:11.155       | 99.00th=[ 4621], 99.50th=[ 5473], 99.90th=[ 8160], 99.95th=[ 8455],
00:20:11.155       | 99.99th=[ 9765]
00:20:11.156     bw (  KiB/s): min=65984, max=77624, per=100.00%, avg=72506.67, stdev=5945.89, samples=3
00:20:11.156     iops        : min=16496, max=19406, avg=18126.67, stdev=1486.47, samples=3
00:20:11.156    write: IOPS=17.9k, BW=70.0MiB/s (73.4MB/s)(140MiB/2001msec); 0 zone resets
00:20:11.156      slat (nsec): min=4665, max=61881, avg=5921.13, stdev=1761.17
00:20:11.156      clat (usec): min=233, max=9849, avg=3562.29, stdev=485.74
00:20:11.156       lat (usec): min=238, max=9869, avg=3568.21, stdev=486.42
00:20:11.156      clat percentiles (usec):
00:20:11.156       |  1.00th=[ 3064],  5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261],
00:20:11.156       | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3490],
00:20:11.156       | 70.00th=[ 3589], 80.00th=[ 3752], 90.00th=[ 4228], 95.00th=[ 4359],
00:20:11.156       | 99.00th=[ 4686], 99.50th=[ 6128], 99.90th=[ 8356], 99.95th=[ 8586],
00:20:11.156       | 99.99th=[ 9634]
00:20:11.156     bw (  KiB/s): min=66352, max=77496, per=100.00%, avg=72450.67, stdev=5646.18, samples=3
00:20:11.156     iops        : min=16588, max=19374, avg=18112.67, stdev=1411.54, samples=3
00:20:11.156    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01%
00:20:11.156    lat (msec)   : 2=0.08%, 4=83.20%, 10=16.68%
00:20:11.156    cpu          : usr=99.85%, sys=0.10%, ctx=5, majf=0, minf=609
00:20:11.156    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:20:11.156       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.156       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:11.156       issued rwts: total=35848,35865,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:11.156       latency   : target=0, window=0, percentile=100.00%, depth=128
00:20:11.156  
00:20:11.156  Run status group 0 (all jobs):
00:20:11.156     READ: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2001-2001msec
00:20:11.156    WRITE: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2001-2001msec
00:20:11.156  -----------------------------------------------------
00:20:11.156  Suppressions used:
00:20:11.156    count      bytes template
00:20:11.156        1         32 /usr/src/fio/parse.c
00:20:11.156  -----------------------------------------------------
00:20:11.156  
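The fio job above runs through SPDK's external ioengine rather than the kernel block stack. Boiled down, the invocation assembled by the trace is the following (paths as in this run; the PCI address in --filename uses dots, 0000.00.10.0, because fio reserves ':' as a filename separator, and libasan is preloaded ahead of the plugin only because this is a sanitizer build):

    # fio via the SPDK NVMe fio plugin, as executed above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.8 $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096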
00:20:11.156   13:56:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:20:11.156   13:56:53 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:20:11.156  
00:20:11.156  real	0m4.231s
00:20:11.156  user	0m3.378s
00:20:11.156  sys	0m0.513s
00:20:11.156   13:56:53 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:11.156   13:56:53 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:20:11.156  ************************************
00:20:11.156  END TEST nvme_fio
00:20:11.156  ************************************
00:20:11.156  
00:20:11.156  real	0m49.976s
00:20:11.156  user	2m12.919s
00:20:11.156  sys	0m11.918s
00:20:11.156   13:56:53 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:11.156   13:56:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:11.156  ************************************
00:20:11.156  END TEST nvme
00:20:11.156  ************************************
00:20:11.415   13:56:53  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:20:11.415   13:56:53  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:20:11.415   13:56:53  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:11.415   13:56:53  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:11.415   13:56:53  -- common/autotest_common.sh@10 -- # set +x
00:20:11.415  ************************************
00:20:11.415  START TEST nvme_scc
00:20:11.415  ************************************
00:20:11.415   13:56:54 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:20:11.415  * Looking for test storage...
00:20:11.415  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:11.415     13:56:54 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:11.415      13:56:54 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version
00:20:11.415      13:56:54 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:11.415     13:56:54 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@345 -- # : 1
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:20:11.415     13:56:54 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:11.415      13:56:54 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:20:11.674     13:56:54 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:20:11.674     13:56:54 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:11.674     13:56:54 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:11.674     13:56:54 nvme_scc -- scripts/common.sh@368 -- # return 0
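The lt/cmp_versions walk above decides whether the installed lcov is recent enough to take the extra branch- and function-coverage flags: both version strings are split on '.', '-' and ':' and compared element by element, with missing components treated as 0. In condensed form (same algorithm as the trace, assuming purely numeric components):

    # "is $1 < $2?" as cmp_versions evaluates it above; 1.15 vs 2 -> true.
    lt() {
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1    # versions are equal
    }
    lt 1.15 2 && echo "lcov older than 2, use the newer coverage flags"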
00:20:11.674     13:56:54 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:11.674     13:56:54 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:11.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:11.674  		--rc genhtml_branch_coverage=1
00:20:11.674  		--rc genhtml_function_coverage=1
00:20:11.674  		--rc genhtml_legend=1
00:20:11.674  		--rc geninfo_all_blocks=1
00:20:11.674  		--rc geninfo_unexecuted_blocks=1
00:20:11.674  		
00:20:11.674  		'
00:20:11.674     13:56:54 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:11.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:11.674  		--rc genhtml_branch_coverage=1
00:20:11.674  		--rc genhtml_function_coverage=1
00:20:11.674  		--rc genhtml_legend=1
00:20:11.674  		--rc geninfo_all_blocks=1
00:20:11.674  		--rc geninfo_unexecuted_blocks=1
00:20:11.674  		
00:20:11.674  		'
00:20:11.674     13:56:54 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:11.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:11.674  		--rc genhtml_branch_coverage=1
00:20:11.674  		--rc genhtml_function_coverage=1
00:20:11.674  		--rc genhtml_legend=1
00:20:11.674  		--rc geninfo_all_blocks=1
00:20:11.674  		--rc geninfo_unexecuted_blocks=1
00:20:11.674  		
00:20:11.674  		'
00:20:11.674     13:56:54 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:11.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:11.674  		--rc genhtml_branch_coverage=1
00:20:11.674  		--rc genhtml_function_coverage=1
00:20:11.674  		--rc genhtml_legend=1
00:20:11.674  		--rc geninfo_all_blocks=1
00:20:11.674  		--rc geninfo_unexecuted_blocks=1
00:20:11.674  		
00:20:11.674  		'
00:20:11.674    13:56:54 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:20:11.674       13:56:54 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:20:11.674      13:56:54 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:20:11.674     13:56:54 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:11.674     13:56:54 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:11.674      13:56:54 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:20:11.674      13:56:54 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:11.674      13:56:54 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:11.674      13:56:54 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:11.674       13:56:54 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:20:11.675       13:56:54 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:20:11.675       13:56:54 nvme_scc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:20:11.675       13:56:54 nvme_scc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:20:11.675       13:56:54 nvme_scc -- paths/export.sh@6 -- # export PATH
00:20:11.675       13:56:54 nvme_scc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:20:11.675     13:56:54 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:20:11.675    13:56:54 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:11.675    13:56:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:20:11.675   13:56:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:20:11.675   13:56:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:20:11.675   13:56:54 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:20:11.933  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:20:11.933  Waiting for block devices as requested
00:20:11.933  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
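That reset is what lets the rest of nvme_scc run against the kernel driver: setup.sh hands 0000:00:10.0 back from uio_pci_generic to the nvme driver, so the controller reappears as /dev/nvme0 and scan_nvme_ctrls can interrogate it with nvme-cli just below. In essence:

    # Rebind the controller to the kernel, then query it, as traced below.
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
    /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0    # parsed by nvme_get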
00:20:12.201   13:56:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:20:12.201   13:56:54 nvme_scc -- scripts/common.sh@18 -- # local i
00:20:12.201   13:56:54 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:20:12.201   13:56:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:20:12.201   13:56:54 nvme_scc -- scripts/common.sh@27 -- # return 0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@18 -- # shift
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340               "'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340               '
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:20:12.201    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:20:12.201   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:20:12.202    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.202   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:20:12.203    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.203   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:20:12.204    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.204   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
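[editor note] The trace above is the generic nvme_get parser from test/nvme/functions.sh filling the nvme0 associative array: each "reg : val" line of `nvme id-ctrl` is split on the first colon and eval'd into the array. A minimal re-sketch of that loop (hypothetical standalone form; the trimming details are assumptions, not copied from the source):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # global assoc array named by the caller
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue    # skip lines without a "reg : val" pair
            reg=${reg//[[:space:]]/}     # "sqes      " -> "sqes"
            eval "${ref}[${reg}]=\"${val# }\""
        done < <("$@")                   # e.g. nvme id-ctrl /dev/nvme0
    }

Because the split happens at the first colon only, wrapped power-state output parses oddly: note above how nvme0[ps0] kept only the first line of the PS0 block while its continuation line landed under the bogus key nvme0[rwt].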
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@18 -- # shift
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:20:12.205    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.205   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
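[editor note] With the controller table built, functions.sh@54 loops over the controller's namespace nodes; the extglob pattern matches both the character device (ng0n1) and the block device (nvme0n1) under /sys/class/nvme/nvme0, and each is fed through the same nvme_get parser via `nvme id-ns`. An illustrative expansion of that glob (paths assumed from this run; requires extglob):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    inst=${ctrl##*nvme}    # -> 0
    name=${ctrl##*/}       # -> nvme0
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
        echo "${ns##*/} -> _ctrl_ns index ${ns##*n}"   # ng0n1 -> 1, nvme0n1 -> 1
    done

Both nodes map to index 1 (`${ns##*n}` strips everything through the last 'n'), so the ng0n1 entry just recorded at functions.sh@58 will be overwritten when the nvme0n1 iteration below reaches the same line.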
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@18 -- # shift
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:20:12.206    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:20:12.206   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.465   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:20:12.465    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:20:12.466    13:56:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:20:12.466   13:56:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:20:12.466   13:56:55 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 ))
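[editor note] At functions.sh@60-63 the fully parsed controller is registered in the global lookup tables. A hedged sketch of what those tables hold after this single-controller run (the declarations are assumed; the values are taken from the trace):

    declare -A ctrls=( [nvme0]=nvme0 )         # ctrl device -> ctrl array name
    declare -A nvmes=( [nvme0]=nvme0_ns )      # ctrl device -> ns table name
    declare -A bdfs=(  [nvme0]=0000:00:10.0 )  # ctrl device -> PCI address
    declare -a ordered_ctrls=( [0]=nvme0 )     # indexed by instance number
    declare -A nvme0_ns=( [1]=nvme0n1 )        # ns index -> ns device

    declare -n ns_table=${nvmes[nvme0]}        # nameref resolves to nvme0_ns
    echo "nvme0 @ ${bdfs[nvme0]}: ${#ns_table[@]} namespace(s)"   # -> 1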
00:20:12.466    13:56:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@192 -- # (( 1 == 0 ))
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:20:12.466      13:56:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:20:12.466     13:56:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@208 -- # echo nvme0
00:20:12.466    13:56:55 nvme_scc -- nvme/functions.sh@209 -- # return 0
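[editor note] get_ctrls_with_feature resolved to nvme0 because ctrl_has_scc read ONCS (0x15d on this QEMU controller) out of the nvme0 array and tested bit 8, the Copy-command (Simple Copy) support bit of the NVMe ONCS field. The check, reduced to its arithmetic:

    oncs=0x15d                      # 0b1_0101_1101
    if (( oncs & 1 << 8 )); then    # bit 8 (0x100) set -> Copy command supported
        echo "controller supports SCC"
    fi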
00:20:12.466   13:56:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:20:12.466   13:56:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:20:12.466   13:56:55 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:20:12.724  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:20:12.983  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
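[editor note] setup.sh left the virtio disks alone because they back mounted filesystems, and rebound the NVMe controller at 0000:00:10.0 from the kernel nvme driver to uio_pci_generic so SPDK can drive it from userspace. Reduced to the standard sysfs rebind it builds on (illustrative only; the real script also handles hugepages, permissions, and skip-lists):

    bdf=0000:00:10.0
    echo "$bdf"          | sudo tee /sys/bus/pci/drivers/nvme/unbind
    echo uio_pci_generic | sudo tee "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"          | sudo tee /sys/bus/pci/drivers_probe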
00:20:13.977   13:56:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:20:13.977   13:56:56 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:13.977   13:56:56 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:13.977   13:56:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:20:13.977  ************************************
00:20:13.977  START TEST nvme_simple_copy
00:20:13.977  ************************************
00:20:13.977   13:56:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:20:13.977  Initializing NVMe Controllers
00:20:13.977  Attaching to 0000:00:10.0
00:20:13.977  Controller supports SCC. Attached to 0000:00:10.0
00:20:13.977    Namespace ID: 1 size: 5GB
00:20:13.977  Initialization complete.
00:20:13.977  
00:20:13.977  Controller QEMU NVMe Ctrl       (12340               )
00:20:13.977  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:20:13.977  Namespace Block Size:4096
00:20:13.977  Writing LBAs 0 to 63 with Random Data
00:20:13.977  Copied LBAs from 0 - 63 to the Destination LBA 256
00:20:13.977  LBAs matching Written Data: 64
00:20:14.236  
00:20:14.236  real	0m0.337s
00:20:14.236  user	0m0.136s
00:20:14.236  sys	0m0.102s
00:20:14.236  ************************************
00:20:14.236  END TEST nvme_simple_copy
00:20:14.236  ************************************
00:20:14.236   13:56:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:14.236   13:56:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
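[editor note] The simple_copy binary wrote 64 random-data LBAs, issued one Simple Copy command moving LBAs 0-63 to destination LBA 256, and read both ranges back to confirm all 64 blocks match. Roughly the same exercise from the shell, as a hedged sketch (the nvme-cli copy flag spellings are an assumption, and this clobbers data, so only run it against a disposable namespace):

    dev=/dev/nvme0n1 bs=4096
    dd if=/dev/urandom of="$dev" bs=$bs count=64 oflag=direct   # LBAs 0-63
    nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63           # nlb is 0-based
    dd if="$dev" of=/tmp/src.bin bs=$bs count=64 iflag=direct
    dd if="$dev" of=/tmp/dst.bin bs=$bs skip=256 count=64 iflag=direct
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"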
00:20:14.236  
00:20:14.236  real	0m2.828s
00:20:14.236  user	0m0.769s
00:20:14.236  sys	0m2.005s
00:20:14.236   13:56:56 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:14.236   13:56:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:20:14.236  ************************************
00:20:14.236  END TEST nvme_scc
00:20:14.236  ************************************
00:20:14.236   13:56:56  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:20:14.236   13:56:56  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:20:14.236   13:56:56  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:20:14.236   13:56:56  -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]]
00:20:14.236   13:56:56  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:20:14.236   13:56:56  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:20:14.236   13:56:56  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:14.236   13:56:56  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:14.236   13:56:56  -- common/autotest_common.sh@10 -- # set +x
00:20:14.236  ************************************
00:20:14.236  START TEST nvme_rpc
00:20:14.236  ************************************
00:20:14.236   13:56:56 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:20:14.236  * Looking for test storage...
00:20:14.236  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:14.236    13:56:56 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:14.236     13:56:56 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:20:14.236     13:56:56 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:14.496     13:56:57 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:14.496    13:56:57 nvme_rpc -- scripts/common.sh@368 -- # return 0
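[editor note] The block above is scripts/common.sh deciding that the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-', and ':' and compares them component-wise until one side wins. A condensed sketch of the same walk (the real helper also validates that each component is decimal, which is assumed away here):

    version_lt() {    # returns 0 when $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1      # equal -> not less-than
    }
    version_lt 1.15 2 && echo "use the lcov 1.x option spellings"   # matches the trace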
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:14.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:14.496  		--rc genhtml_branch_coverage=1
00:20:14.496  		--rc genhtml_function_coverage=1
00:20:14.496  		--rc genhtml_legend=1
00:20:14.496  		--rc geninfo_all_blocks=1
00:20:14.496  		--rc geninfo_unexecuted_blocks=1
00:20:14.496  		
00:20:14.496  		'
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:14.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:14.496  		--rc genhtml_branch_coverage=1
00:20:14.496  		--rc genhtml_function_coverage=1
00:20:14.496  		--rc genhtml_legend=1
00:20:14.496  		--rc geninfo_all_blocks=1
00:20:14.496  		--rc geninfo_unexecuted_blocks=1
00:20:14.496  		
00:20:14.496  		'
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:14.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:14.496  		--rc genhtml_branch_coverage=1
00:20:14.496  		--rc genhtml_function_coverage=1
00:20:14.496  		--rc genhtml_legend=1
00:20:14.496  		--rc geninfo_all_blocks=1
00:20:14.496  		--rc geninfo_unexecuted_blocks=1
00:20:14.496  		
00:20:14.496  		'
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:14.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:14.496  		--rc genhtml_branch_coverage=1
00:20:14.496  		--rc genhtml_function_coverage=1
00:20:14.496  		--rc genhtml_legend=1
00:20:14.496  		--rc geninfo_all_blocks=1
00:20:14.496  		--rc geninfo_unexecuted_blocks=1
00:20:14.496  		
00:20:14.496  		'
00:20:14.496   13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:14.496    13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:20:14.496     13:56:57 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:20:14.496     13:56:57 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:20:14.496     13:56:57 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:20:14.496     13:56:57 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:20:14.496      13:56:57 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:14.496      13:56:57 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:20:14.496     13:56:57 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:20:14.496     13:56:57 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:20:14.496    13:56:57 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:20:14.496   13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
00:20:14.496   13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:20:14.496   13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83401
00:20:14.496   13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:20:14.496   13:56:57 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83401
00:20:14.496   13:56:57 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83401 ']'
00:20:14.496   13:56:57 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:14.496   13:56:57 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:14.496  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:14.496   13:56:57 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:14.496   13:56:57 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:14.496   13:56:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:14.496  [2024-12-11 13:56:57.251825] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:14.496  [2024-12-11 13:56:57.252085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83401 ]
00:20:14.755  [2024-12-11 13:56:57.456902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:15.014  [2024-12-11 13:56:57.652493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:15.014  [2024-12-11 13:56:57.652530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:15.950   13:56:58 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:15.950   13:56:58 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:20:15.950   13:56:58 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:20:16.518  Nvme0n1
00:20:16.518   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:20:16.518   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:20:16.518  request:
00:20:16.518  {
00:20:16.518    "bdev_name": "Nvme0n1",
00:20:16.518    "filename": "non_existing_file",
00:20:16.518    "method": "bdev_nvme_apply_firmware",
00:20:16.518    "req_id": 1
00:20:16.518  }
00:20:16.518  Got JSON-RPC error response
00:20:16.518  response:
00:20:16.518  {
00:20:16.518    "code": -32603,
00:20:16.518    "message": "open file failed."
00:20:16.518  }
00:20:16.518   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:20:16.518   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:20:16.518   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:20:16.778   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:20:16.778   13:56:59 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83401
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83401 ']'
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83401
00:20:16.778    13:56:59 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:16.778    13:56:59 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83401
00:20:16.778  killing process with pid 83401
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83401'
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@973 -- # kill 83401
00:20:16.778   13:56:59 nvme_rpc -- common/autotest_common.sh@978 -- # wait 83401
00:20:20.066  
00:20:20.066  real	0m5.232s
00:20:20.066  user	0m9.763s
00:20:20.066  sys	0m0.846s
00:20:20.066  ************************************
00:20:20.066  END TEST nvme_rpc
00:20:20.066  ************************************
00:20:20.066   13:57:02 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:20.066   13:57:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:20.066   13:57:02  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:20:20.066   13:57:02  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:20.066   13:57:02  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:20.066   13:57:02  -- common/autotest_common.sh@10 -- # set +x
00:20:20.066  ************************************
00:20:20.066  START TEST nvme_rpc_timeouts
00:20:20.066  ************************************
00:20:20.066   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:20:20.066  * Looking for test storage...
00:20:20.066  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:20.066    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:20.066     13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version
00:20:20.066     13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:20.066    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:20.066     13:57:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:20:20.066    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:20:20.067    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:20.067    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:20.067    13:57:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:20:20.067    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:20.067    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:20.067  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:20.067  		--rc genhtml_branch_coverage=1
00:20:20.067  		--rc genhtml_function_coverage=1
00:20:20.067  		--rc genhtml_legend=1
00:20:20.067  		--rc geninfo_all_blocks=1
00:20:20.067  		--rc geninfo_unexecuted_blocks=1
00:20:20.067  		
00:20:20.067  		'
00:20:20.067    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:20.067  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:20.067  		--rc genhtml_branch_coverage=1
00:20:20.067  		--rc genhtml_function_coverage=1
00:20:20.067  		--rc genhtml_legend=1
00:20:20.067  		--rc geninfo_all_blocks=1
00:20:20.067  		--rc geninfo_unexecuted_blocks=1
00:20:20.067  		
00:20:20.067  		'
00:20:20.067    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:20.067  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:20.067  		--rc genhtml_branch_coverage=1
00:20:20.067  		--rc genhtml_function_coverage=1
00:20:20.067  		--rc genhtml_legend=1
00:20:20.067  		--rc geninfo_all_blocks=1
00:20:20.067  		--rc geninfo_unexecuted_blocks=1
00:20:20.067  		
00:20:20.067  		'
00:20:20.067    13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:20.067  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:20.067  		--rc genhtml_branch_coverage=1
00:20:20.067  		--rc genhtml_function_coverage=1
00:20:20.067  		--rc genhtml_legend=1
00:20:20.067  		--rc geninfo_all_blocks=1
00:20:20.067  		--rc geninfo_unexecuted_blocks=1
00:20:20.067  		
00:20:20.067  		'
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83487
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83487
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83520
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83520
00:20:20.067   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 83520 ']'
00:20:20.067   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:20.067   13:57:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:20:20.067   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:20.067  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:20.067   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:20.067   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:20.067   13:57:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:20:20.067  [2024-12-11 13:57:02.438860] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:20.067  [2024-12-11 13:57:02.438994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83520 ]
00:20:20.067  [2024-12-11 13:57:02.613838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:20.067  [2024-12-11 13:57:02.753376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:20.067  [2024-12-11 13:57:02.753413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:21.444   13:57:03 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:21.444   13:57:03 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:20:21.444  Checking default timeout settings:
00:20:21.444   13:57:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:20:21.444   13:57:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:20:21.444  Making settings changes with rpc:
00:20:21.444   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:20:21.444   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:20:21.703  Check default vs. modified settings:
00:20:21.703   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:20:21.703   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:20:21.962   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:20:21.962   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:20:21.962    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83487
00:20:21.962    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:20:21.962    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:20:21.962   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:20:21.962    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:20:21.962    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83487
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:20:21.963  Setting action_on_timeout is changed as expected.
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83487
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83487
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:20:21.963    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:20:21.963  Setting timeout_us is changed as expected.
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:20:21.963   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:20:22.222    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83487
00:20:22.222    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:20:22.222    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:20:22.222    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83487
00:20:22.222    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:20:22.222    13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:20:22.222  Setting timeout_admin_us is changed as expected.
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83487 /tmp/settings_modified_83487
00:20:22.222   13:57:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83520
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 83520 ']'
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 83520
00:20:22.223    13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:22.223    13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83520
00:20:22.223  killing process with pid 83520
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83520'
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 83520
00:20:22.223   13:57:04 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 83520
00:20:24.770  RPC TIMEOUT SETTING TEST PASSED.
00:20:24.770   13:57:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:20:24.770  ************************************
00:20:24.770  END TEST nvme_rpc_timeouts
00:20:24.770  ************************************
00:20:24.770  
00:20:24.770  real	0m5.230s
00:20:24.770  user	0m9.992s
00:20:24.770  sys	0m0.846s
00:20:24.770   13:57:07 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:24.770   13:57:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:20:24.770    13:57:07  -- spdk/autotest.sh@239 -- # uname -s
00:20:24.770   13:57:07  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:20:24.770   13:57:07  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:20:24.770   13:57:07  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:24.770   13:57:07  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:24.770   13:57:07  -- common/autotest_common.sh@10 -- # set +x
00:20:24.770  ************************************
00:20:24.770  START TEST sw_hotplug
00:20:24.770  ************************************
00:20:24.770   13:57:07 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:20:25.047  * Looking for test storage...
00:20:25.047  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:20:25.047    13:57:07 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:25.047     13:57:07 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version
00:20:25.047     13:57:07 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:25.047    13:57:07 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:25.047     13:57:07 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:25.047    13:57:07 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:20:25.047    13:57:07 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:25.047    13:57:07 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:25.047  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.047  		--rc genhtml_branch_coverage=1
00:20:25.047  		--rc genhtml_function_coverage=1
00:20:25.047  		--rc genhtml_legend=1
00:20:25.047  		--rc geninfo_all_blocks=1
00:20:25.047  		--rc geninfo_unexecuted_blocks=1
00:20:25.047  		
00:20:25.047  		'
00:20:25.047    13:57:07 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:25.047  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.047  		--rc genhtml_branch_coverage=1
00:20:25.047  		--rc genhtml_function_coverage=1
00:20:25.047  		--rc genhtml_legend=1
00:20:25.048  		--rc geninfo_all_blocks=1
00:20:25.048  		--rc geninfo_unexecuted_blocks=1
00:20:25.048  		
00:20:25.048  		'
00:20:25.048    13:57:07 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:25.048  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.048  		--rc genhtml_branch_coverage=1
00:20:25.048  		--rc genhtml_function_coverage=1
00:20:25.048  		--rc genhtml_legend=1
00:20:25.048  		--rc geninfo_all_blocks=1
00:20:25.048  		--rc geninfo_unexecuted_blocks=1
00:20:25.048  		
00:20:25.048  		'
00:20:25.048    13:57:07 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:25.048  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.048  		--rc genhtml_branch_coverage=1
00:20:25.048  		--rc genhtml_function_coverage=1
00:20:25.048  		--rc genhtml_legend=1
00:20:25.048  		--rc geninfo_all_blocks=1
00:20:25.048  		--rc geninfo_unexecuted_blocks=1
00:20:25.048  		
00:20:25.048  		'
00:20:25.048   13:57:07 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:20:25.307  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:20:25.307  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:20:26.244   13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:20:26.244   13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:20:26.244   13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:20:26.244    13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@233 -- # local class
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:20:26.244       13:57:08 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:20:26.244       13:57:08 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:20:26.244       13:57:08 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:20:26.244      13:57:08 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@18 -- # local i
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:20:26.244     13:57:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@328 -- # (( 1 ))
00:20:26.244    13:57:08 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0
00:20:26.244   13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1
00:20:26.244   13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
00:20:26.244   13:57:08 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:20:26.503  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:20:26.761  Waiting for block devices as requested
00:20:26.761  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:26.761   13:57:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0
00:20:26.761   13:57:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:20:27.020  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:20:27.278  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:20:27.278  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:20:28.216   13:57:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=84055
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:20:28.216   13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:20:28.216    13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:20:28.216    13:57:10 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:20:28.216    13:57:10 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:20:28.216    13:57:10 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:20:28.216    13:57:10 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:20:28.216     13:57:10 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:20:28.216     13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:20:28.216     13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:20:28.216     13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:20:28.216     13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:20:28.216     13:57:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:20:28.474  Initializing NVMe Controllers
00:20:28.474  Attaching to 0000:00:10.0
00:20:28.474  Attached to 0000:00:10.0
00:20:28.474  Initialization complete. Starting I/O...
00:20:28.474  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:20:28.474  
00:20:29.423  QEMU NVMe Ctrl       (12340               ):       1844 I/Os completed (+1844)
00:20:29.423  
00:20:30.801  QEMU NVMe Ctrl       (12340               ):       4048 I/Os completed (+2204)
00:20:30.801  
00:20:31.739  QEMU NVMe Ctrl       (12340               ):       6632 I/Os completed (+2584)
00:20:31.739  
00:20:32.675  QEMU NVMe Ctrl       (12340               ):       9224 I/Os completed (+2592)
00:20:32.675  
00:20:33.612  QEMU NVMe Ctrl       (12340               ):      11756 I/Os completed (+2532)
00:20:33.612  
00:20:34.548     13:57:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:20:34.548     13:57:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:20:34.548     13:57:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:20:34.548  [2024-12-11 13:57:16.984485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:20:34.548  Controller removed: QEMU NVMe Ctrl       (12340               )
00:20:34.548  [2024-12-11 13:57:16.986336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  [2024-12-11 13:57:16.986568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  [2024-12-11 13:57:16.986732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  [2024-12-11 13:57:16.986857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:20:34.548  [2024-12-11 13:57:16.994964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  [2024-12-11 13:57:16.995168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  [2024-12-11 13:57:16.995320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548  [2024-12-11 13:57:16.995438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:20:34.548  
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:20:34.548     13:57:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:20:34.548  Attaching to 0000:00:10.0
00:20:34.548  Attached to 0000:00:10.0
00:20:35.486  QEMU NVMe Ctrl       (12340               ):       2352 I/Os completed (+2352)
00:20:35.486  
00:20:36.433  QEMU NVMe Ctrl       (12340               ):       4916 I/Os completed (+2564)
00:20:36.433  
00:20:37.809  QEMU NVMe Ctrl       (12340               ):       7504 I/Os completed (+2588)
00:20:37.809  
00:20:38.746  QEMU NVMe Ctrl       (12340               ):      10089 I/Os completed (+2585)
00:20:38.746  
00:20:39.683  QEMU NVMe Ctrl       (12340               ):      12575 I/Os completed (+2486)
00:20:39.683  
00:20:40.620  QEMU NVMe Ctrl       (12340               ):      15035 I/Os completed (+2460)
00:20:40.620  
00:20:40.620     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:20:40.620     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:20:40.620     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:20:40.620     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:20:40.620  [2024-12-11 13:57:23.283049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:20:40.620  Controller removed: QEMU NVMe Ctrl       (12340               )
00:20:40.620  [2024-12-11 13:57:23.284773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  [2024-12-11 13:57:23.284961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  [2024-12-11 13:57:23.285115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  [2024-12-11 13:57:23.285225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:20:40.620  [2024-12-11 13:57:23.293145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  [2024-12-11 13:57:23.293208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  [2024-12-11 13:57:23.293234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620  [2024-12-11 13:57:23.293253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:40.620     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:20:40.620     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:20:40.879     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:20:40.879     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:20:40.879     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:20:40.879     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:20:40.879     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:20:40.879     13:57:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:20:40.879  Attaching to 0000:00:10.0
00:20:40.879  Attached to 0000:00:10.0
00:20:41.446  QEMU NVMe Ctrl       (12340               ):       1572 I/Os completed (+1572)
00:20:41.446  
00:20:42.821  QEMU NVMe Ctrl       (12340               ):       4132 I/Os completed (+2560)
00:20:42.821  
00:20:43.753  QEMU NVMe Ctrl       (12340               ):       6719 I/Os completed (+2587)
00:20:43.753  
00:20:44.688  QEMU NVMe Ctrl       (12340               ):       9283 I/Os completed (+2564)
00:20:44.688  
00:20:45.623  QEMU NVMe Ctrl       (12340               ):      11739 I/Os completed (+2456)
00:20:45.623  
00:20:46.559  QEMU NVMe Ctrl       (12340               ):      14311 I/Os completed (+2572)
00:20:46.559  
00:20:46.819     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:20:46.819     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:20:46.819     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:20:46.819     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:20:46.819  [2024-12-11 13:57:29.578284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:20:46.819  Controller removed: QEMU NVMe Ctrl       (12340               )
00:20:46.819  [2024-12-11 13:57:29.580076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  [2024-12-11 13:57:29.580246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  [2024-12-11 13:57:29.580361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  [2024-12-11 13:57:29.580476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:20:46.819  [2024-12-11 13:57:29.588447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  [2024-12-11 13:57:29.588629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  [2024-12-11 13:57:29.588770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:46.819  [2024-12-11 13:57:29.588891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:20:47.078     13:57:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:20:47.078  Attaching to 0000:00:10.0
00:20:47.078  Attached to 0000:00:10.0
00:20:47.078  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:20:47.337  [2024-12-11 13:57:29.865563] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:20:53.906     13:57:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:20:53.906     13:57:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:20:53.906    13:57:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=24.88
00:20:53.906    13:57:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 24.88
00:20:53.906    13:57:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:20:53.906   13:57:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.88
00:20:53.906   13:57:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.88 1
00:20:53.906  remove_attach_helper took 24.88s to complete (handling 1 nvme drive(s))
00:20:53.906   13:57:35 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 84055
00:20:59.179  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (84055) - No such process
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 84055
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=84400
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:20:59.179   13:57:41 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 84400
00:20:59.179   13:57:41 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 84400 ']'
00:20:59.179   13:57:41 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:59.179   13:57:41 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:59.179  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:59.179   13:57:41 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:59.179   13:57:41 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:59.179   13:57:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:20:59.438  [2024-12-11 13:57:41.968959] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:59.438  [2024-12-11 13:57:41.969424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84400 ]
00:20:59.438  [2024-12-11 13:57:42.162326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:59.697  [2024-12-11 13:57:42.302693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:21:00.636   13:57:43 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:00.636   13:57:43 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:21:00.636   13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:21:00.636   13:57:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.636   13:57:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:00.636   13:57:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.636   13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:21:00.636   13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:21:00.636    13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:21:00.636    13:57:43 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:21:00.636    13:57:43 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:21:00.636    13:57:43 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:21:00.636    13:57:43 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:21:00.636     13:57:43 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:21:00.636     13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:21:00.636     13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:21:00.636     13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:21:00.636     13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:21:00.636     13:57:43 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:21:07.202     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:07.202     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:21:07.202     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:21:07.202     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:21:07.202     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:07.202      13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:07.202      13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:07.202      13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:07.202       13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:07.202       13:57:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.202       13:57:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:07.202       13:57:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.202  [2024-12-11 13:57:49.495839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:21:07.202     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:21:07.202  [2024-12-11 13:57:49.498273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:07.203  [2024-12-11 13:57:49.498324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:07.203  [2024-12-11 13:57:49.498347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.203  [2024-12-11 13:57:49.498383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:07.203  [2024-12-11 13:57:49.498399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:07.203  [2024-12-11 13:57:49.498416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.203  [2024-12-11 13:57:49.498432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:07.203  [2024-12-11 13:57:49.498449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:07.203  [2024-12-11 13:57:49.498462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.203  [2024-12-11 13:57:49.498484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:07.203  [2024-12-11 13:57:49.498497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:07.203  [2024-12-11 13:57:49.498530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.203     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:21:07.461     13:57:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:07.461      13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:07.461      13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:07.461      13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:07.461       13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:07.461       13:57:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.461       13:57:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:07.461       13:57:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:21:07.461     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:21:07.720     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:21:07.720     13:57:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:21:14.282      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:21:14.282      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:14.282      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:14.282       13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:14.282       13:57:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.282       13:57:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:14.282       13:57:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:14.282      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:14.282      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:14.282      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:14.282       13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:14.282       13:57:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.282       13:57:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:14.282       13:57:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:21:14.282     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:21:14.282  [2024-12-11 13:57:56.395861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:21:14.282  [2024-12-11 13:57:56.398243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:14.282  [2024-12-11 13:57:56.398302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:14.282  [2024-12-11 13:57:56.398327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.282  [2024-12-11 13:57:56.398355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:14.282  [2024-12-11 13:57:56.398372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:14.282  [2024-12-11 13:57:56.398386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.282  [2024-12-11 13:57:56.398405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:14.282  [2024-12-11 13:57:56.398419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:14.282  [2024-12-11 13:57:56.398436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.282  [2024-12-11 13:57:56.398452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:14.282  [2024-12-11 13:57:56.398468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:14.283  [2024-12-11 13:57:56.398482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:14.283      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:14.283      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:14.283       13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:14.283      13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:14.283       13:57:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.283       13:57:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:14.283       13:57:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:21:14.283     13:57:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:21:14.541     13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:21:14.541     13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:21:14.541     13:57:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
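Annotation: the echoes at sw_hotplug.sh@56-@66 above are the re-attach half of a hotplug cycle. xtrace shows only the values being echoed, never their redirect targets, so the destinations below are assumptions based on the standard Linux sysfs PCI interface:

    echo 1 > /sys/bus/pci/rescan                                        # @56 (destination assumed): rediscover the removed device
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59 (destination assumed): pin the userspace driver
    # @60 and @61 echo the BDF itself twice; the trace hides the targets,
    # plausibly a bind/probe pair under /sys/bus/pci
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62 (destination assumed): clear the override again
    sleep 6                                                             # @66: give the target time to re-enumerate the controller

The removal half is the lone "echo 1" at @40, consistent with echo 1 > /sys/bus/pci/devices/$bdf/remove (again assumed, not visible in the trace).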
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:21:21.110      13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:21:21.110      13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:21.110      13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:21.110       13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:21.110       13:58:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.110       13:58:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:21.110       13:58:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
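Annotation: the @50/@70 checks above call a two-line helper, bdev_bdfs, whose body is visible in the xtrace at @12 and @13. Reconstructed from exactly those lines (the /dev/fd/63 in the trace is bash's process substitution):

    bdev_bdfs() {
        # Ask the SPDK target for its bdevs, keep each NVMe bdev's PCI
        # address, and de-duplicate the list.
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The caller captures the output as an array, bdfs=($(bdev_bdfs)), then either waits for it to drain after detach (@50) or compares it against the expected BDF after re-attach (@71). The backslash-riddled right-hand side at @71 is only xtrace's rendering of a literal [[ ... == pattern ]] operand; the test is a plain string match against 0000:00:10.0.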
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:21:21.110  [2024-12-11 13:58:03.195950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:21:21.110  [2024-12-11 13:58:03.198676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:21.110  [2024-12-11 13:58:03.198734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:21.110  [2024-12-11 13:58:03.198755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.110  [2024-12-11 13:58:03.198786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:21.110  [2024-12-11 13:58:03.198801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:21.110  [2024-12-11 13:58:03.198818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.110  [2024-12-11 13:58:03.198834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:21.110  [2024-12-11 13:58:03.198851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:21.110  [2024-12-11 13:58:03.198865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.110  [2024-12-11 13:58:03.198884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:21.110  [2024-12-11 13:58:03.198897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:21.110  [2024-12-11 13:58:03.198915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:21.110      13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:21.110       13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:21.110       13:58:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.110      13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:21.110       13:58:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:21.110      13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:21.110       13:58:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:21:21.110     13:58:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:21:27.675      13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:21:27.675      13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:27.675      13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:27.675       13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:27.675       13:58:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.675       13:58:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:27.675       13:58:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@719 -- # time=26.12
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@720 -- # echo 26.12
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:21:27.675   13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.12
00:21:27.675   13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.12 1
00:21:27.675  remove_attach_helper took 26.12s to complete (handling 1 nvme drive(s))
00:21:27.675   13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
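Annotation: each helper run is timed through timing_cmd (autotest_common.sh@709-@722 in the trace), and debug_remove_attach_helper (sw_hotplug.sh@19-@22) prints the summary line above. A minimal sketch consistent with those line numbers; the output routing inside the command substitution is simplified (the real helper lets the timed command's own output flow through, as this log shows):

    timing_cmd() {
        local cmd_es=0
        [[ -t 0 ]] && exec < /dev/null   # @711: keep the timed command off the terminal's stdin (intent assumed)
        local time=0 TIMEFORMAT=%2R      # @713: bash built-in `time`, real seconds, two decimals
        # @719: e.g. remove_attach_helper 3 6 true; capture only the %2R line
        time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
        echo "$time"                     # @720: the 26.12 / 26.07 picked up by the caller
        return "$cmd_es"                 # @722
    }

    debug_remove_attach_helper() {
        local helper_time=0                                  # @19
        helper_time=$(timing_cmd remove_attach_helper "$@")  # @21
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' \
            "$helper_time" 1                                 # @22; drive-count source assumed
    }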
00:21:27.675   13:58:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.675   13:58:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:27.675   13:58:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.675   13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:21:27.675   13:58:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.675   13:58:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:27.675   13:58:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.675   13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:21:27.675   13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:21:27.675    13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:21:27.675    13:58:09 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:21:27.675     13:58:09 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:21:27.675     13:58:09 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:21:32.982     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:32.982     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:21:32.982     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:21:32.982     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:21:32.982     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:32.982      13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:32.982       13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:32.982       13:58:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:32.982       13:58:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:32.982      13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:32.983      13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:32.983       13:58:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:32.983     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:21:32.983     13:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:21:32.983  [2024-12-11 13:58:15.647618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:21:32.983  [2024-12-11 13:58:15.650002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:32.983  [2024-12-11 13:58:15.650056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:32.983  [2024-12-11 13:58:15.650084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:32.983  [2024-12-11 13:58:15.650110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:32.983  [2024-12-11 13:58:15.650127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:32.983  [2024-12-11 13:58:15.650142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:32.983  [2024-12-11 13:58:15.650162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:32.983  [2024-12-11 13:58:15.650176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:32.983  [2024-12-11 13:58:15.650193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:32.983  [2024-12-11 13:58:15.650208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:32.983  [2024-12-11 13:58:15.650224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:32.983  [2024-12-11 13:58:15.650238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:33.549      13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:33.549      13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:33.549      13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:33.549       13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:33.549       13:58:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.549       13:58:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:33.549       13:58:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:21:33.549     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:21:33.807     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:21:33.807     13:58:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:21:40.370      13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:21:40.370      13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:40.370      13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:40.370       13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:40.370       13:58:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.370       13:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:40.370       13:58:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:21:40.370  [2024-12-11 13:58:22.447752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:21:40.370  [2024-12-11 13:58:22.450328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:40.370  [2024-12-11 13:58:22.450383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:40.370  [2024-12-11 13:58:22.450403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:40.370  [2024-12-11 13:58:22.450430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:40.370  [2024-12-11 13:58:22.450445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:40.370  [2024-12-11 13:58:22.450466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:40.370  [2024-12-11 13:58:22.450482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:40.370  [2024-12-11 13:58:22.450498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:40.370  [2024-12-11 13:58:22.450512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:40.370  [2024-12-11 13:58:22.450530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:40.370  [2024-12-11 13:58:22.450543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:40.370  [2024-12-11 13:58:22.450559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:40.370      13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:40.370      13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:40.370      13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:40.370       13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:40.370       13:58:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.370       13:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:40.370       13:58:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:21:40.370     13:58:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:21:46.940      13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:21:46.940      13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:46.940      13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:46.940       13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:46.940       13:58:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.940       13:58:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:46.940       13:58:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:46.940      13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:46.940      13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:46.940      13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:46.940       13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:46.940       13:58:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.940       13:58:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:46.940       13:58:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:21:46.940     13:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:21:46.940  [2024-12-11 13:58:28.847852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:21:46.940  [2024-12-11 13:58:28.850113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:46.940  [2024-12-11 13:58:28.850159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:46.940  [2024-12-11 13:58:28.850183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.940  [2024-12-11 13:58:28.850210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:46.940  [2024-12-11 13:58:28.850228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:46.940  [2024-12-11 13:58:28.850242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.940  [2024-12-11 13:58:28.850261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:46.940  [2024-12-11 13:58:28.850285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:46.940  [2024-12-11 13:58:28.850306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.940  [2024-12-11 13:58:28.850321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:21:46.940  [2024-12-11 13:58:28.850337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:46.940  [2024-12-11 13:58:28.850351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:21:46.940      13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:21:46.940      13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:46.940      13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:46.940       13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:46.940       13:58:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.940       13:58:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:46.940       13:58:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:21:46.940     13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:21:53.507     13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:21:53.507     13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:21:53.507      13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:21:53.507      13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:21:53.507      13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:21:53.507       13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:21:53.507       13:58:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:53.507       13:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:53.507       13:58:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:53.507     13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:21:53.507     13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:21:53.507    13:58:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=26.07
00:21:53.507    13:58:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 26.07
00:21:53.507    13:58:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:21:53.507   13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.07
00:21:53.507   13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.07 1
00:21:53.507  remove_attach_helper took 26.07s to complete (handling 1 nvme drive(s))
00:21:53.507   13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:21:53.507   13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 84400
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 84400 ']'
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 84400
00:21:53.507    13:58:35 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:53.507    13:58:35 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84400
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:53.507  killing process with pid 84400
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84400'
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@973 -- # kill 84400
00:21:53.507   13:58:35 sw_hotplug -- common/autotest_common.sh@978 -- # wait 84400
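Annotation: killprocess (the @954-@978 fragments above) is the harness's guarded kill-and-reap. A sketch reconstructed from the traced lines; everything between them is an assumption:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                        # @954: a pid is required
        kill -0 "$pid" || return 0                       # @958: signal 0 only probes that the process exists
        if [ "$(uname)" = Linux ]; then                  # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: here reactor_0, the SPDK app thread
        fi
        if [ "$process_name" = sudo ]; then              # @964: a sudo wrapper would need its child targeted
            :                                            # branch not taken here; handling assumed
        fi
        echo "killing process with pid $pid"             # @972
        kill "$pid"                                      # @973: default SIGTERM
        wait "$pid"                                      # @978: reap, so the exit status is collected before cleanup continues
    }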
00:21:56.042   13:58:38 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:56.042  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:21:56.042  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:56.983  
00:21:56.983  real	1m32.048s
00:21:56.983  user	1m5.612s
00:21:56.983  sys	0m17.481s
00:21:56.983   13:58:39 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:56.983   13:58:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:21:56.983  ************************************
00:21:56.983  END TEST sw_hotplug
00:21:56.983  ************************************
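Annotation: the whole test above is three passes of remove_attach_helper (invoked as remove_attach_helper 3 6 true). Its skeleton, reconstructed purely from the sw_hotplug.sh line numbers in the trace; loop conditions are as traced, sysfs destinations and the @71 operands are the assumptions noted earlier:

    remove_attach_helper() {
        local hotplug_events=$1      # @27: 3 remove/attach cycles
        local hotplug_wait=$2        # @28: 6 s settle time
        local use_bdev=$3            # @29: true -> verify through bdev_get_bdevs
        local dev bdfs               # @30

        sleep "$hotplug_wait"        # @36: let the initial attach settle

        while ((hotplug_events--)); do                               # @38
            for dev in "${nvmes[@]}"; do                             # @39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"          # @40 (destination assumed)
            done
            if "$use_bdev"; then                                     # @43: trace shows `true` evaluated
                while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do # @50
                    sleep 0.5                                        # @50
                    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # @51
                done
            fi
            echo 1 > /sys/bus/pci/rescan                             # @56 (destination assumed)
            for dev in "${nvmes[@]}"; do                             # @58
                # @59-@62: driver_override/bind dance, see the earlier re-attach note
                echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
                echo '' > "/sys/bus/pci/devices/$dev/driver_override"
            done
            sleep "$hotplug_wait"                                    # @66
            if "$use_bdev"; then                                     # @68
                bdfs=($(bdev_bdfs))                                  # @70
                [[ ${bdfs[0]} == "$dev" ]]                           # @71: the controller is back at its BDF
            fi
        done
    }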
00:21:56.983   13:58:39  -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]]
00:21:56.983   13:58:39  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:21:56.983   13:58:39  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@260 -- # timing_exit lib
00:21:56.983   13:58:39  -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:56.983   13:58:39  -- common/autotest_common.sh@10 -- # set +x
00:21:56.983   13:58:39  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:21:56.983   13:58:39  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:21:56.983   13:58:39  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:21:56.983   13:58:39  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:21:56.983   13:58:39  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:21:56.983   13:58:39  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:21:56.983   13:58:39  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:21:56.983   13:58:39  -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:56.983   13:58:39  -- common/autotest_common.sh@10 -- # set +x
00:21:56.983   13:58:39  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:21:56.983   13:58:39  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:21:56.983   13:58:39  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:21:56.983   13:58:39  -- common/autotest_common.sh@10 -- # set +x
00:21:59.517  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:21:59.517  Waiting for block devices as requested
00:21:59.517  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:22:00.084  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:22:00.084  Cleaning
00:22:00.084  Removing:    /var/run/dpdk/spdk0/config
00:22:00.084  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:22:00.084  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:22:00.084  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:22:00.084  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:22:00.084  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:22:00.084  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:22:00.084  Removing:    /dev/shm/spdk_tgt_trace.pid68486
00:22:00.084  Removing:    /var/run/dpdk/spdk0
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid68234
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid68486
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid68715
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid68830
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid68891
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69025
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69049
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69219
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69481
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69668
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69785
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid69903
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70036
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70155
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70195
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70236
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70302
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70414
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70909
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid70990
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71069
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71091
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71239
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71261
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71420
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71441
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71511
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71534
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71599
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71617
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71818
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71854
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71891
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid71975
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid72165
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid72265
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid72325
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid73545
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid73761
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid73963
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74089
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74215
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74290
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74319
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74347
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74773
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74856
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid74968
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid75032
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid75128
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid75349
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid75405
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid75464
00:22:00.084  Removing:    /var/run/dpdk/spdk_pid75529
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid75682
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid75836
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76070
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76355
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76379
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76422
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76452
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76483
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76519
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76549
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76580
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76616
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76646
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76677
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76718
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76743
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76774
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76804
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76834
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76860
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76894
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76920
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76951
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid76987
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77023
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77068
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77151
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77195
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77222
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77274
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77301
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77326
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77383
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77408
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77452
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77477
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77502
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77527
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77548
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77572
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77597
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77621
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77661
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77700
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77728
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77770
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77797
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77821
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77876
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77905
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77948
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77969
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid77988
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78008
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78033
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78058
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78082
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78103
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78204
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78303
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78470
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78497
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78545
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78597
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78629
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78654
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78682
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78729
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78761
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78855
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78912
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid78960
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79215
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79333
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79373
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79409
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79460
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79511
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79562
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79610
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79728
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79810
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid79852
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid80104
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid80199
00:22:00.343  Removing:    /var/run/dpdk/spdk_pid80303
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid80351
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid80382
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid80466
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid80870
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid80918
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid81220
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid81320
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid81414
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid81472
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid81497
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid81528
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid82768
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid82909
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid82913
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid82935
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid83401
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid83520
00:22:00.602  Removing:    /var/run/dpdk/spdk_pid84400
00:22:00.602  Clean
00:22:00.602   13:58:43  -- common/autotest_common.sh@1453 -- # return 0
00:22:00.602   13:58:43  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:22:00.602   13:58:43  -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:00.602   13:58:43  -- common/autotest_common.sh@10 -- # set +x
00:22:00.602   13:58:43  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:22:00.602   13:58:43  -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:00.602   13:58:43  -- common/autotest_common.sh@10 -- # set +x
00:22:00.602   13:58:43  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:00.602   13:58:43  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:22:00.602   13:58:43  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:22:00.602   13:58:43  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:22:00.602    13:58:43  -- spdk/autotest.sh@398 -- # hostname
00:22:00.602   13:58:43  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:22:00.860  geninfo: WARNING: invalid characters removed from testname!
00:23:08.574   13:59:42  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:08.574   13:59:48  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:08.831   13:59:51  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:12.119   13:59:54  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:15.405   13:59:57  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:17.939   14:00:00  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
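Annotation: the post-test coverage pass above reduces to a capture-merge-prune pipeline once the repeated --rc switches are factored out. A condensed restatement; the $LCOV shorthand is for readability only, the literal invocations are the ones logged above:

    LCOV="lcov -q --rc lcov_branch_coverage=1"   # plus the other --rc/genhtml switches shown above
    $LCOV -c --no-external -d "$SPDK_DIR" -t "$HOSTNAME" -o cov_test.info            # @398: capture this run
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info                        # @399: merge baseline + test
    $LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info                             # @400: drop vendored DPDK
    $LCOV -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info # @404: drop system headers
    $LCOV -r cov_total.info '*/examples/vmd/*' -o cov_total.info                     # @405
    $LCOV -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info                   # @406
    $LCOV -r cov_total.info '*/app/spdk_top/*' -o cov_total.info                     # @407

Each -r pass removes records whose source path matches the glob, so the final cov_total.info covers only first-party SPDK code.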
00:23:20.473   14:00:03  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:23:20.473   14:00:03  -- spdk/autorun.sh@1 -- $ timing_finish
00:23:20.473   14:00:03  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:23:20.473   14:00:03  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:20.473   14:00:03  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:23:20.473   14:00:03  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:20.473  + [[ -n 2382 ]]
00:23:20.473  + sudo kill 2382
00:23:20.483  [Pipeline] }
00:23:20.498  [Pipeline] // timeout
00:23:20.503  [Pipeline] }
00:23:20.517  [Pipeline] // stage
00:23:20.522  [Pipeline] }
00:23:20.536  [Pipeline] // catchError
00:23:20.544  [Pipeline] stage
00:23:20.546  [Pipeline] { (Stop VM)
00:23:20.558  [Pipeline] sh
00:23:20.840  + vagrant halt
00:23:25.031  ==> default: Halting domain...
00:23:35.022  [Pipeline] sh
00:23:35.304  + vagrant destroy -f
00:23:38.600  ==> default: Removing domain...
00:23:38.612  [Pipeline] sh
00:23:38.895  + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output
00:23:38.904  [Pipeline] }
00:23:38.919  [Pipeline] // stage
00:23:38.924  [Pipeline] }
00:23:38.938  [Pipeline] // dir
00:23:38.943  [Pipeline] }
00:23:38.957  [Pipeline] // wrap
00:23:38.963  [Pipeline] }
00:23:38.975  [Pipeline] // catchError
00:23:38.984  [Pipeline] stage
00:23:38.986  [Pipeline] { (Epilogue)
00:23:38.998  [Pipeline] sh
00:23:39.280  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:57.379  [Pipeline] catchError
00:23:57.380  [Pipeline] {
00:23:57.392  [Pipeline] sh
00:23:57.673  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:57.931  Artifacts sizes are good
00:23:57.940  [Pipeline] }
00:23:57.954  [Pipeline] // catchError
00:23:57.964  [Pipeline] archiveArtifacts
00:23:57.971  Archiving artifacts
00:23:58.250  [Pipeline] cleanWs
00:23:58.258  [WS-CLEANUP] Deleting project workspace...
00:23:58.258  [WS-CLEANUP] Deferred wipeout is used...
00:23:58.264  [WS-CLEANUP] done
00:23:58.265  [Pipeline] }
00:23:58.277  [Pipeline] // stage
00:23:58.281  [Pipeline] }
00:23:58.294  [Pipeline] // node
00:23:58.297  [Pipeline] End of Pipeline
00:23:58.337  Finished: SUCCESS