00:00:00.001  Started by upstream project "autotest-per-patch" build number 132356
00:00:00.001  originally caused by:
00:00:00.001   Started by user sys_sgci
00:00:00.173  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-cmb-pmr-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.173  The recommended git tool is: git
00:00:00.174  using credential 00000000-0000-0000-0000-000000000002
00:00:00.175   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-cmb-pmr-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.207  Fetching changes from the remote Git repository
00:00:00.210   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.242  Using shallow fetch with depth 1
00:00:00.242  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.242   > git --version # timeout=10
00:00:00.261   > git --version # 'git version 2.39.2'
00:00:00.261  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.279  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.279   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.289   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.301   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.312  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.312   > git config core.sparsecheckout # timeout=10
00:00:06.322   > git read-tree -mu HEAD # timeout=10
00:00:06.338   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.358  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.358   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.453  [Pipeline] Start of Pipeline
00:00:06.463  [Pipeline] library
00:00:06.464  Loading library shm_lib@master
00:00:06.464  Library shm_lib@master is cached. Copying from home.
00:00:06.477  [Pipeline] node
00:00:06.489  Running on VM-host-SM16 in /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest
00:00:06.490  [Pipeline] {
00:00:06.498  [Pipeline] catchError
00:00:06.500  [Pipeline] {
00:00:06.510  [Pipeline] wrap
00:00:06.517  [Pipeline] {
00:00:06.523  [Pipeline] stage
00:00:06.525  [Pipeline] { (Prologue)
00:00:06.542  [Pipeline] echo
00:00:06.543  Node: VM-host-SM16
00:00:06.550  [Pipeline] cleanWs
00:00:06.559  [WS-CLEANUP] Deleting project workspace...
00:00:06.559  [WS-CLEANUP] Deferred wipeout is used...
00:00:06.566  [WS-CLEANUP] done
00:00:06.794  [Pipeline] setCustomBuildProperty
00:00:06.884  [Pipeline] httpRequest
00:00:07.779  [Pipeline] echo
00:00:07.781  Sorcerer 10.211.164.20 is alive
00:00:07.790  [Pipeline] retry
00:00:07.792  [Pipeline] {
00:00:07.806  [Pipeline] httpRequest
00:00:07.810  HttpMethod: GET
00:00:07.811  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.812  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.812  Response Code: HTTP/1.1 200 OK
00:00:07.813  Success: Status code 200 is in the accepted range: 200,404
00:00:07.814  Saving response body to /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.759  [Pipeline] }
00:00:08.776  [Pipeline] // retry
00:00:08.781  [Pipeline] sh
00:00:09.057  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.071  [Pipeline] httpRequest
00:00:10.334  [Pipeline] echo
00:00:10.336  Sorcerer 10.211.164.20 is alive
00:00:10.345  [Pipeline] retry
00:00:10.347  [Pipeline] {
00:00:10.362  [Pipeline] httpRequest
00:00:10.366  HttpMethod: GET
00:00:10.367  URL: http://10.211.164.20/packages/spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:00:10.368  Sending request to url: http://10.211.164.20/packages/spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:00:10.389  Response Code: HTTP/1.1 200 OK
00:00:10.390  Success: Status code 200 is in the accepted range: 200,404
00:00:10.391  Saving response body to /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:01:10.647  [Pipeline] }
00:01:10.668  [Pipeline] // retry
00:01:10.676  [Pipeline] sh
00:01:10.956  + tar --no-same-owner -xf spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:01:14.247  [Pipeline] sh
00:01:14.524  + git -C spdk log --oneline -n5
00:01:14.524  1c7c7c64f test/iscsi_tgt: Remove support for the namespace arg
00:01:14.524  4c583db59 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP
00:01:14.524  c788bae60 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:01:14.524  e4689ab38 test/nvmf: Remove all transport conditions from the test suites
00:01:14.524  097b7c969 test/nvmf: Drop $RDMA_IP_LIST
00:01:14.542  [Pipeline] writeFile
00:01:14.555  [Pipeline] sh
00:01:14.830  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:14.843  [Pipeline] sh
00:01:15.127  + cat autorun-spdk.conf
00:01:15.127  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.127  SPDK_TEST_NVME=1
00:01:15.127  SPDK_TEST_NVME_CMB=1
00:01:15.127  SPDK_TEST_NVME_PMR=1
00:01:15.127  SPDK_TEST_NO_MULTI=1
00:01:15.127  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.133  RUN_NIGHTLY=0
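The conf dumped above is what drives this run: functional tests only, with the NVMe CMB and PMR suites enabled and nightly-only tests switched off. Since autorun.sh simply sources the conf file passed to it (as the xtrace later in this log shows), roughly the same run can be reproduced locally with a hand-written conf; the /tmp path below is illustrative, not part of the job:

    # Hypothetical local reproduction, assuming an SPDK checkout in the current directory.
    cat > /tmp/autorun-local.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVME=1
    SPDK_TEST_NVME_CMB=1
    SPDK_TEST_NVME_PMR=1
    SPDK_TEST_NO_MULTI=1
    RUN_NIGHTLY=0
    EOF
    ./autorun.sh /tmp/autorun-local.conf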
00:01:15.135  [Pipeline] }
00:01:15.146  [Pipeline] // stage
00:01:15.185  [Pipeline] stage
00:01:15.187  [Pipeline] { (Run VM)
00:01:15.195  [Pipeline] sh
00:01:15.473  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:15.474  + echo 'Start stage prepare_nvme.sh'
00:01:15.474  Start stage prepare_nvme.sh
00:01:15.474  + [[ -n 3 ]]
00:01:15.474  + disk_prefix=ex3
00:01:15.474  + [[ -n /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest ]]
00:01:15.474  + [[ -e /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/autorun-spdk.conf ]]
00:01:15.474  + source /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/autorun-spdk.conf
00:01:15.474  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.474  ++ SPDK_TEST_NVME=1
00:01:15.474  ++ SPDK_TEST_NVME_CMB=1
00:01:15.474  ++ SPDK_TEST_NVME_PMR=1
00:01:15.474  ++ SPDK_TEST_NO_MULTI=1
00:01:15.474  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.474  ++ RUN_NIGHTLY=0
00:01:15.474  + cd /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest
00:01:15.474  + nvme_files=()
00:01:15.474  + declare -A nvme_files
00:01:15.474  + backend_dir=/var/lib/libvirt/images/backends
00:01:15.474  + nvme_files['nvme.img']=5G
00:01:15.474  + nvme_files['nvme-cmb.img']=5G
00:01:15.474  + nvme_files['nvme-multi0.img']=4G
00:01:15.474  + nvme_files['nvme-multi1.img']=4G
00:01:15.474  + nvme_files['nvme-multi2.img']=4G
00:01:15.474  + nvme_files['nvme-openstack.img']=8G
00:01:15.474  + nvme_files['nvme-zns.img']=5G
00:01:15.474  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:15.474  + nvme_files['nvme.img.pmr']=32M
00:01:15.474  + nvme_files['nvme-cmb.img.pmr']=32M
00:01:15.474  + ((  SPDK_TEST_FTL == 1  ))
00:01:15.474  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:15.474  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img.pmr -s 32M
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img.pmr', fmt=raw size=33554432 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img.pmr -s 32M
00:01:15.474  Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img.pmr', fmt=raw size=33554432 preallocation=falloc
00:01:15.474  + for nvme in "${!nvme_files[@]}"
00:01:15.474  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:15.732  Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.732  ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:15.732  + echo 'End stage prepare_nvme.sh'
00:01:15.732  End stage prepare_nvme.sh
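Each "Formatting ..." line above reports a raw backing file created with falloc preallocation. create_nvme_img.sh's internals are not shown in this log, but an equivalent file can be produced directly with qemu-img; the command below is a sketch for the 5G ex3-nvme.img case, assuming qemu-img is installed on the host:

    # Hypothetical equivalent of one "Formatting" line: raw format, fallocate-style preallocation.
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex3-nvme.img 5G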
00:01:15.743  [Pipeline] sh
00:01:16.021  + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:16.022  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img,nvme,,true,/var/lib/libvirt/images/backends/ex3-nvme.img.pmr:32M -b /var/lib/libvirt/images/backends/ex3-nvme-cmb.img,nvme,,true,/var/lib/libvirt/images/backends/ex3-nvme-cmb.img.pmr:32M -H -a -v -f fedora39
00:01:16.022  
00:01:16.022  DIR=/var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/spdk/scripts/vagrant
00:01:16.022  SPDK_DIR=/var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/spdk
00:01:16.022  VAGRANT_TARGET=/var/jenkins/workspace/nvme-cmb-pmr-vg-autotest
00:01:16.022  HELP=0
00:01:16.022  DRY_RUN=0
00:01:16.022  NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-cmb.img,
00:01:16.022  NVME_DISKS_TYPE=nvme,nvme,
00:01:16.022  NVME_AUTO_CREATE=0
00:01:16.022  NVME_DISKS_NAMESPACES=,,
00:01:16.022  NVME_CMB=true,true,
00:01:16.022  NVME_PMR=/var/lib/libvirt/images/backends/ex3-nvme.img.pmr:32M,/var/lib/libvirt/images/backends/ex3-nvme-cmb.img.pmr:32M,
00:01:16.022  NVME_ZNS=,,
00:01:16.022  NVME_MS=,,
00:01:16.022  NVME_FDP=,,
00:01:16.022  SPDK_VAGRANT_DISTRO=fedora39
00:01:16.022  SPDK_VAGRANT_VMCPU=10
00:01:16.022  SPDK_VAGRANT_VMRAM=12288
00:01:16.022  SPDK_VAGRANT_PROVIDER=libvirt
00:01:16.022  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:16.022  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:16.022  SPDK_OPENSTACK_NETWORK=0
00:01:16.022  VAGRANT_PACKAGE_BOX=0
00:01:16.022  VAGRANTFILE=/var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:16.022  FORCE_DISTRO=true
00:01:16.022  VAGRANT_BOX_VERSION=
00:01:16.022  EXTRA_VAGRANTFILES=
00:01:16.022  NIC_MODEL=e1000
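Comparing the Setup line with the variables printed above, each -b argument appears to pack one backing disk as file,type,namespaces,cmb,pmr-file:size, with the fields appended to NVME_FILE, NVME_DISKS_TYPE, NVME_DISKS_NAMESPACES, NVME_CMB and NVME_PMR respectively (one entry per -b). This field order is inferred from the output, not from the script source; the snippet below only illustrates the split:

    # Illustrative only: split the first -b argument on commas.
    arg='/var/lib/libvirt/images/backends/ex3-nvme.img,nvme,,true,/var/lib/libvirt/images/backends/ex3-nvme.img.pmr:32M'
    IFS=',' read -r file type namespaces cmb pmr <<< "$arg"
    printf 'file=%s type=%s ns=%s cmb=%s pmr=%s\n' "$file" "$type" "$namespaces" "$cmb" "$pmr"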
00:01:16.022  
00:01:16.022  mkdir: created directory '/var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/fedora39-libvirt'
00:01:16.022  /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest
00:01:19.309  Bringing machine 'default' up with 'libvirt' provider...
00:01:19.878  ==> default: Creating image (snapshot of base box volume).
00:01:19.878  ==> default: Creating domain with the following settings...
00:01:19.878  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1732088735_280d709723d49c2e9be1
00:01:19.878  ==> default:  -- Domain type:       kvm
00:01:19.878  ==> default:  -- Cpus:              10
00:01:19.878  ==> default:  -- Feature:           acpi
00:01:19.878  ==> default:  -- Feature:           apic
00:01:19.878  ==> default:  -- Feature:           pae
00:01:19.878  ==> default:  -- Memory:            12288M
00:01:19.878  ==> default:  -- Memory Backing:    hugepages: 
00:01:19.878  ==> default:  -- Management MAC:    
00:01:19.878  ==> default:  -- Loader:            
00:01:19.878  ==> default:  -- Nvram:             
00:01:19.878  ==> default:  -- Base box:          spdk/fedora39
00:01:19.878  ==> default:  -- Storage pool:      default
00:01:19.878  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732088735_280d709723d49c2e9be1.img (20G)
00:01:19.878  ==> default:  -- Volume Cache:      default
00:01:19.878  ==> default:  -- Kernel:            
00:01:19.878  ==> default:  -- Initrd:            
00:01:19.878  ==> default:  -- Graphics Type:     vnc
00:01:19.878  ==> default:  -- Graphics Port:     -1
00:01:19.878  ==> default:  -- Graphics IP:       127.0.0.1
00:01:19.878  ==> default:  -- Graphics Password: Not defined
00:01:19.878  ==> default:  -- Video Type:        cirrus
00:01:19.878  ==> default:  -- Video VRAM:        9216
00:01:19.878  ==> default:  -- Sound Type:	
00:01:19.878  ==> default:  -- Keymap:            en-us
00:01:19.878  ==> default:  -- TPM Path:          
00:01:19.878  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:19.878  ==> default:  -- Command line args: 
00:01:19.878  ==> default:     -> value=-device, 
00:01:19.878  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10,cmb_size_mb=128,pmrdev=pmr0, 
00:01:19.878  ==> default:     -> value=-object, 
00:01:19.878  ==> default:     -> value=memory-backend-file,id=pmr0,share=on,mem-path=/var/lib/libvirt/images/backends/ex3-nvme.img.pmr,size=32M, 
00:01:19.878  ==> default:     -> value=-drive, 
00:01:19.878  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 
00:01:19.878  ==> default:     -> value=-device, 
00:01:19.878  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:19.878  ==> default:     -> value=-device, 
00:01:19.878  ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11,cmb_size_mb=128,pmrdev=pmr1, 
00:01:19.878  ==> default:     -> value=-object, 
00:01:19.878  ==> default:     -> value=memory-backend-file,id=pmr1,share=on,mem-path=/var/lib/libvirt/images/backends/ex3-nvme-cmb.img.pmr,size=32M, 
00:01:19.878  ==> default:     -> value=-drive, 
00:01:19.878  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-cmb.img,if=none,id=nvme-1-drive0, 
00:01:19.878  ==> default:     -> value=-device, 
00:01:19.878  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
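The "-> value=" lines above are the extra arguments vagrant-libvirt passes to QEMU for the two NVMe controllers (serials 12340/12341 at PCI addresses 0x10/0x11, each with a 128 MiB CMB and a 32 MiB file-backed PMR). Joined back together, they correspond to a command line along the following lines; the emulator path is SPDK_QEMU_EMULATOR from earlier in this log, and the "..." stands for the rest of the libvirt-generated domain definition, which this log does not show:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
      -device nvme,id=nvme-0,serial=12340,addr=0x10,cmb_size_mb=128,pmrdev=pmr0 \
      -object memory-backend-file,id=pmr0,share=on,mem-path=/var/lib/libvirt/images/backends/ex3-nvme.img.pmr,size=32M \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11,cmb_size_mb=128,pmrdev=pmr1 \
      -object memory-backend-file,id=pmr1,share=on,mem-path=/var/lib/libvirt/images/backends/ex3-nvme-cmb.img.pmr,size=32M \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-cmb.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096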
00:01:20.146  ==> default: Creating shared folders metadata...
00:01:20.146  ==> default: Starting domain.
00:01:22.054  ==> default: Waiting for domain to get an IP address...
00:01:36.930  ==> default: Waiting for SSH to become available...
00:01:37.896  ==> default: Configuring and enabling network interfaces...
00:01:43.164      default: SSH address: 192.168.121.220:22
00:01:43.164      default: SSH username: vagrant
00:01:43.164      default: SSH auth method: private key
00:01:45.073  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-cmb-pmr-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:53.188  ==> default: Mounting SSHFS shared folder...
00:01:55.100  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-cmb-pmr-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:55.100  ==> default: Checking Mount..
00:01:56.479  ==> default: Folder Successfully Mounted!
00:01:56.479  ==> default: Running provisioner: file...
00:01:57.046      default: ~/.gitconfig => .gitconfig
00:01:57.613  
00:01:57.613    SUCCESS!
00:01:57.613  
00:01:57.613    cd to /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:57.613    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:57.613    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:57.613  
00:01:57.622  [Pipeline] }
00:01:57.640  [Pipeline] // stage
00:01:57.650  [Pipeline] dir
00:01:57.650  Running in /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/fedora39-libvirt
00:01:57.653  [Pipeline] {
00:01:57.666  [Pipeline] catchError
00:01:57.668  [Pipeline] {
00:01:57.682  [Pipeline] sh
00:01:57.965  + vagrant ssh-config --host vagrant
00:01:57.965  + sed -ne /^Host/,$p
00:01:57.965  + tee ssh_conf
00:02:02.153  Host vagrant
00:02:02.153    HostName 192.168.121.220
00:02:02.153    User vagrant
00:02:02.153    Port 22
00:02:02.153    UserKnownHostsFile /dev/null
00:02:02.153    StrictHostKeyChecking no
00:02:02.153    PasswordAuthentication no
00:02:02.153    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:02.153    IdentitiesOnly yes
00:02:02.153    LogLevel FATAL
00:02:02.153    ForwardAgent yes
00:02:02.153    ForwardX11 yes
00:02:02.153  
00:02:02.168  [Pipeline] withEnv
00:02:02.171  [Pipeline] {
00:02:02.183  [Pipeline] sh
00:02:02.461  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:02.461  		source /etc/os-release
00:02:02.461  		[[ -e /image.version ]] && img=$(< /image.version)
00:02:02.461  		# Minimal, systemd-like check.
00:02:02.461  		if [[ -e /.dockerenv ]]; then
00:02:02.461  			# Clear garbage from the node's name:
00:02:02.461  			#  agt-er_autotest_547-896 -> autotest_547-896
00:02:02.461  			#  $HOSTNAME is the actual container id
00:02:02.461  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:02.461  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:02.461  				# We can assume this is a mount from a host where container is running,
00:02:02.461  				# so fetch its hostname to easily identify the target swarm worker.
00:02:02.462  				container="$(< /etc/hostname) ($agent)"
00:02:02.462  			else
00:02:02.462  				# Fallback
00:02:02.462  				container=$agent
00:02:02.462  			fi
00:02:02.462  		fi
00:02:02.462  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:02.462  
00:02:02.733  [Pipeline] }
00:02:02.753  [Pipeline] // withEnv
00:02:02.763  [Pipeline] setCustomBuildProperty
00:02:02.781  [Pipeline] stage
00:02:02.784  [Pipeline] { (Tests)
00:02:02.801  [Pipeline] sh
00:02:03.119  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:03.388  [Pipeline] sh
00:02:03.665  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:03.940  [Pipeline] timeout
00:02:03.940  Timeout set to expire in 1 hr 30 min
00:02:03.942  [Pipeline] {
00:02:03.957  [Pipeline] sh
00:02:04.237  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:04.804  HEAD is now at 1c7c7c64f test/iscsi_tgt: Remove support for the namespace arg
00:02:04.815  [Pipeline] sh
00:02:05.095  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:05.367  [Pipeline] sh
00:02:05.647  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:05.922  [Pipeline] sh
00:02:06.203  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-cmb-pmr-vg-autotest ./autoruner.sh spdk_repo
00:02:06.528  ++ readlink -f spdk_repo
00:02:06.528  + DIR_ROOT=/home/vagrant/spdk_repo
00:02:06.528  + [[ -n /home/vagrant/spdk_repo ]]
00:02:06.528  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:06.528  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:06.528  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:06.528  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:06.528  + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:06.528  + [[ nvme-cmb-pmr-vg-autotest == pkgdep-* ]]
00:02:06.528  + cd /home/vagrant/spdk_repo
00:02:06.528  + source /etc/os-release
00:02:06.528  ++ NAME='Fedora Linux'
00:02:06.528  ++ VERSION='39 (Cloud Edition)'
00:02:06.528  ++ ID=fedora
00:02:06.528  ++ VERSION_ID=39
00:02:06.528  ++ VERSION_CODENAME=
00:02:06.528  ++ PLATFORM_ID=platform:f39
00:02:06.528  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:06.528  ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:06.528  ++ LOGO=fedora-logo-icon
00:02:06.528  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:06.528  ++ HOME_URL=https://fedoraproject.org/
00:02:06.528  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:06.528  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:06.528  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:06.528  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:06.528  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:06.528  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:06.528  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:06.528  ++ SUPPORT_END=2024-11-12
00:02:06.528  ++ VARIANT='Cloud Edition'
00:02:06.528  ++ VARIANT_ID=cloud
00:02:06.528  + uname -a
00:02:06.528  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:06.528  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:06.788  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:06.788  Hugepages
00:02:06.788  node     hugesize     free /  total
00:02:06.788  node0   1048576kB        0 /      0
00:02:06.788  node0      2048kB        0 /      0
00:02:06.788  
00:02:06.788  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:06.788  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:06.788  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:06.788  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
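setup.sh status confirms the guest sees the two emulated controllers at 0000:00:10.0 and 0000:00:11.0 (matching addr=0x10/0x11 on the QEMU command line), still bound to the kernel nvme driver and with no hugepages reserved yet. The test scripts take care of that themselves later in the run; done by hand, the usual step would look something like the sketch below, where the 4096 MB hugepage figure is an arbitrary example rather than a value from this log:

    # Hypothetical manual setup: reserve hugepages and rebind the NVMe controllers
    # to a userspace-capable driver before running SPDK applications.
    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh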
00:02:06.788  + rm -f /tmp/spdk-ld-path
00:02:06.788  + source autorun-spdk.conf
00:02:06.788  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.788  ++ SPDK_TEST_NVME=1
00:02:06.788  ++ SPDK_TEST_NVME_CMB=1
00:02:06.788  ++ SPDK_TEST_NVME_PMR=1
00:02:06.788  ++ SPDK_TEST_NO_MULTI=1
00:02:06.788  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:06.788  ++ RUN_NIGHTLY=0
00:02:06.788  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:06.788  + sudo /home/vagrant/spdk_repo/spdk/scripts/get-pmr
00:02:06.788  0000:00:10.0:64-bit:prefetchable:0xe8000000:0xefffffff:0x08000000:cmb
00:02:06.788  0000:00:10.0:64-bit:prefetchable:0xfa000000:0xfbffffff:0x02000000:pmr
00:02:07.048  0000:00:11.0:64-bit:prefetchable:0xf0000000:0xf7ffffff:0x08000000:cmb
00:02:07.048  0000:00:11.0:64-bit:prefetchable:0xfc000000:0xfdffffff:0x02000000:pmr
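get-pmr lists, per controller, the prefetchable 64-bit BARs backing the CMB and PMR: the 0x08000000 (128 MiB) windows match cmb_size_mb=128 from the QEMU arguments, and the 0x02000000 (32 MiB) windows match the 32M file-backed PMR objects. A cross-check from inside the guest, assuming lspci is available:

    # Hypothetical check: the prefetchable memory regions of the first controller
    # should show the same 128M/32M windows reported above.
    sudo lspci -s 0000:00:10.0 -vv | grep -i prefetchable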
00:02:07.048  + [[ -n '' ]]
00:02:07.048  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:07.048  + for M in /var/spdk/build-*-manifest.txt
00:02:07.048  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:07.048  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:07.048  + for M in /var/spdk/build-*-manifest.txt
00:02:07.048  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:07.048  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:07.048  + for M in /var/spdk/build-*-manifest.txt
00:02:07.048  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:07.048  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:07.048  ++ uname
00:02:07.048  + [[ Linux == \L\i\n\u\x ]]
00:02:07.048  + sudo dmesg -T
00:02:07.048  + sudo dmesg --clear
00:02:07.048  + dmesg_pid=5404
00:02:07.048  + [[ Fedora Linux == FreeBSD ]]
00:02:07.048  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:07.048  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:07.048  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:07.048  + [[ -x /usr/src/fio-static/fio ]]
00:02:07.048  + sudo dmesg -Tw
00:02:07.048  + export FIO_BIN=/usr/src/fio-static/fio
00:02:07.048  + FIO_BIN=/usr/src/fio-static/fio
00:02:07.048  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:07.048  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:07.048  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:07.048  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:07.048  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:07.048  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:07.048  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:07.048  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:07.048  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:07.048    07:46:23  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:07.048   07:46:23  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CMB=1
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_NVME_PMR=1
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NO_MULTI=1
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:07.048    07:46:23  -- spdk_repo/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0
00:02:07.048   07:46:23  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:07.048   07:46:23  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:07.307     07:46:23  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:07.307    07:46:23  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:07.307     07:46:23  -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:07.307     07:46:23  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:07.307     07:46:23  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:07.307     07:46:23  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:07.307      07:46:23  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.307      07:46:23  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.307      07:46:23  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.307      07:46:23  -- paths/export.sh@5 -- $ export PATH
00:02:07.307      07:46:23  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.307    07:46:23  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:07.307      07:46:23  -- common/autobuild_common.sh@493 -- $ date +%s
00:02:07.307     07:46:23  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732088783.XXXXXX
00:02:07.307    07:46:23  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732088783.aBUmOG
00:02:07.307    07:46:23  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:07.307    07:46:23  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:07.307    07:46:23  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:07.307    07:46:23  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:07.307    07:46:23  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:07.307     07:46:23  -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:07.307     07:46:23  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:07.307     07:46:23  -- common/autotest_common.sh@10 -- $ set +x
00:02:07.307    07:46:23  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-coverage --with-ublk'
00:02:07.307    07:46:23  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:07.307    07:46:23  -- pm/common@17 -- $ local monitor
00:02:07.307    07:46:23  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:07.307    07:46:23  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:07.307     07:46:23  -- pm/common@21 -- $ date +%s
00:02:07.307    07:46:23  -- pm/common@25 -- $ sleep 1
00:02:07.307     07:46:23  -- pm/common@21 -- $ date +%s
00:02:07.307    07:46:23  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732088783
00:02:07.307    07:46:23  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732088783
00:02:07.307  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732088783_collect-vmstat.pm.log
00:02:07.307  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732088783_collect-cpu-load.pm.log
00:02:08.244    07:46:24  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:08.244   07:46:24  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:08.244   07:46:24  -- spdk/autobuild.sh@12 -- $ umask 022
00:02:08.244   07:46:24  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:08.244   07:46:24  -- spdk/autobuild.sh@16 -- $ date -u
00:02:08.244  Wed Nov 20 07:46:24 AM UTC 2024
00:02:08.244   07:46:24  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:08.244  v25.01-pre-207-g1c7c7c64f
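The describe output breaks down as the nearest tag (v25.01-pre), the number of commits on top of it (207), and the abbreviated HEAD commit (g1c7c7c64f), which matches the 1c7c7c64f revision checked out at the start of this run. Illustrative commands to pull the pieces apart, run inside the same checkout:

    git describe --tags --abbrev=0   # nearest tag, here v25.01-pre
    git rev-parse --short HEAD       # abbreviated commit, here starting 1c7c7c64f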
00:02:08.244   07:46:24  -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:08.244   07:46:24  -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
00:02:08.244   07:46:24  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:08.244   07:46:24  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:08.244   07:46:24  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:08.244   07:46:24  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:08.244   07:46:24  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:08.244   07:46:24  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:08.244   07:46:24  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:08.244   07:46:24  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:08.244   07:46:24  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-coverage --with-ublk --with-shared
00:02:08.244  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:08.244  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:08.810  Using 'verbs' RDMA provider
00:02:21.945  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:36.854  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:36.854  Creating mk/config.mk...done.
00:02:36.854  Creating mk/cc.flags.mk...done.
00:02:36.854  Type 'make' to build.
00:02:36.854   07:46:51  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:36.854   07:46:51  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:36.854   07:46:51  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:36.854   07:46:51  -- common/autotest_common.sh@10 -- $ set +x
00:02:36.854  ************************************
00:02:36.854  START TEST make
00:02:36.854  ************************************
00:02:36.854   07:46:51 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:36.854  make[1]: Nothing to be done for 'all'.
00:02:49.120  The Meson build system
00:02:49.120  Version: 1.5.0
00:02:49.120  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:49.120  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:49.120  Build type: native build
00:02:49.120  Program cat found: YES (/usr/bin/cat)
00:02:49.120  Project name: DPDK
00:02:49.120  Project version: 24.03.0
00:02:49.120  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:49.120  C linker for the host machine: cc ld.bfd 2.40-14
00:02:49.120  Host machine cpu family: x86_64
00:02:49.120  Host machine cpu: x86_64
00:02:49.120  Message: ## Building in Developer Mode ##
00:02:49.120  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:49.120  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:49.120  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:49.120  Program python3 found: YES (/usr/bin/python3)
00:02:49.120  Program cat found: YES (/usr/bin/cat)
00:02:49.120  Compiler for C supports arguments -march=native: YES 
00:02:49.120  Checking for size of "void *" : 8 
00:02:49.120  Checking for size of "void *" : 8 (cached)
00:02:49.120  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:02:49.120  Library m found: YES
00:02:49.120  Library numa found: YES
00:02:49.120  Has header "numaif.h" : YES 
00:02:49.120  Library fdt found: NO
00:02:49.120  Library execinfo found: NO
00:02:49.120  Has header "execinfo.h" : YES 
00:02:49.120  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:49.120  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:49.120  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:49.120  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:49.120  Run-time dependency openssl found: YES 3.1.1
00:02:49.120  Run-time dependency libpcap found: YES 1.10.4
00:02:49.120  Has header "pcap.h" with dependency libpcap: YES 
00:02:49.120  Compiler for C supports arguments -Wcast-qual: YES 
00:02:49.120  Compiler for C supports arguments -Wdeprecated: YES 
00:02:49.120  Compiler for C supports arguments -Wformat: YES 
00:02:49.120  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:02:49.120  Compiler for C supports arguments -Wformat-security: NO 
00:02:49.120  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:49.120  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:49.120  Compiler for C supports arguments -Wnested-externs: YES 
00:02:49.120  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:49.120  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:49.120  Compiler for C supports arguments -Wsign-compare: YES 
00:02:49.120  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:49.120  Compiler for C supports arguments -Wundef: YES 
00:02:49.120  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:49.120  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:49.120  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:49.120  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:49.120  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:49.120  Program objdump found: YES (/usr/bin/objdump)
00:02:49.120  Compiler for C supports arguments -mavx512f: YES 
00:02:49.120  Checking if "AVX512 checking" compiles: YES 
00:02:49.120  Fetching value of define "__SSE4_2__" : 1 
00:02:49.120  Fetching value of define "__AES__" : 1 
00:02:49.120  Fetching value of define "__AVX__" : 1 
00:02:49.120  Fetching value of define "__AVX2__" : 1 
00:02:49.120  Fetching value of define "__AVX512BW__" : (undefined) 
00:02:49.120  Fetching value of define "__AVX512CD__" : (undefined) 
00:02:49.120  Fetching value of define "__AVX512DQ__" : (undefined) 
00:02:49.120  Fetching value of define "__AVX512F__" : (undefined) 
00:02:49.120  Fetching value of define "__AVX512VL__" : (undefined) 
00:02:49.120  Fetching value of define "__PCLMUL__" : 1 
00:02:49.120  Fetching value of define "__RDRND__" : 1 
00:02:49.120  Fetching value of define "__RDSEED__" : 1 
00:02:49.120  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:02:49.120  Fetching value of define "__znver1__" : (undefined) 
00:02:49.120  Fetching value of define "__znver2__" : (undefined) 
00:02:49.120  Fetching value of define "__znver3__" : (undefined) 
00:02:49.120  Fetching value of define "__znver4__" : (undefined) 
00:02:49.120  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:49.120  Message: lib/log: Defining dependency "log"
00:02:49.120  Message: lib/kvargs: Defining dependency "kvargs"
00:02:49.120  Message: lib/telemetry: Defining dependency "telemetry"
00:02:49.120  Checking for function "getentropy" : NO 
00:02:49.120  Message: lib/eal: Defining dependency "eal"
00:02:49.120  Message: lib/ring: Defining dependency "ring"
00:02:49.120  Message: lib/rcu: Defining dependency "rcu"
00:02:49.120  Message: lib/mempool: Defining dependency "mempool"
00:02:49.120  Message: lib/mbuf: Defining dependency "mbuf"
00:02:49.120  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:49.120  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:49.120  Compiler for C supports arguments -mpclmul: YES 
00:02:49.120  Compiler for C supports arguments -maes: YES 
00:02:49.120  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:49.120  Compiler for C supports arguments -mavx512bw: YES 
00:02:49.120  Compiler for C supports arguments -mavx512dq: YES 
00:02:49.120  Compiler for C supports arguments -mavx512vl: YES 
00:02:49.120  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:49.120  Compiler for C supports arguments -mavx2: YES 
00:02:49.121  Compiler for C supports arguments -mavx: YES 
00:02:49.121  Message: lib/net: Defining dependency "net"
00:02:49.121  Message: lib/meter: Defining dependency "meter"
00:02:49.121  Message: lib/ethdev: Defining dependency "ethdev"
00:02:49.121  Message: lib/pci: Defining dependency "pci"
00:02:49.121  Message: lib/cmdline: Defining dependency "cmdline"
00:02:49.121  Message: lib/hash: Defining dependency "hash"
00:02:49.121  Message: lib/timer: Defining dependency "timer"
00:02:49.121  Message: lib/compressdev: Defining dependency "compressdev"
00:02:49.121  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:49.121  Message: lib/dmadev: Defining dependency "dmadev"
00:02:49.121  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:49.121  Message: lib/power: Defining dependency "power"
00:02:49.121  Message: lib/reorder: Defining dependency "reorder"
00:02:49.121  Message: lib/security: Defining dependency "security"
00:02:49.121  Has header "linux/userfaultfd.h" : YES 
00:02:49.121  Has header "linux/vduse.h" : YES 
00:02:49.121  Message: lib/vhost: Defining dependency "vhost"
00:02:49.121  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:49.121  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:49.121  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:49.121  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:49.121  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:49.121  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:49.121  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:49.121  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:49.121  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:49.121  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:49.121  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:49.121  Configuring doxy-api-html.conf using configuration
00:02:49.121  Configuring doxy-api-man.conf using configuration
00:02:49.121  Program mandb found: YES (/usr/bin/mandb)
00:02:49.121  Program sphinx-build found: NO
00:02:49.121  Configuring rte_build_config.h using configuration
00:02:49.121  Message: 
00:02:49.121  =================
00:02:49.121  Applications Enabled
00:02:49.121  =================
00:02:49.121  
00:02:49.121  apps:
00:02:49.121  	
00:02:49.121  
00:02:49.121  Message: 
00:02:49.121  =================
00:02:49.121  Libraries Enabled
00:02:49.121  =================
00:02:49.121  
00:02:49.121  libs:
00:02:49.121  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:49.121  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:49.121  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:49.121  
00:02:49.121  Message: 
00:02:49.121  ===============
00:02:49.121  Drivers Enabled
00:02:49.121  ===============
00:02:49.121  
00:02:49.121  common:
00:02:49.121  	
00:02:49.121  bus:
00:02:49.121  	pci, vdev, 
00:02:49.121  mempool:
00:02:49.121  	ring, 
00:02:49.121  dma:
00:02:49.121  	
00:02:49.121  net:
00:02:49.121  	
00:02:49.121  crypto:
00:02:49.121  	
00:02:49.121  compress:
00:02:49.121  	
00:02:49.121  vdpa:
00:02:49.121  	
00:02:49.121  
00:02:49.121  Message: 
00:02:49.121  =================
00:02:49.121  Content Skipped
00:02:49.121  =================
00:02:49.121  
00:02:49.121  apps:
00:02:49.121  	dumpcap:	explicitly disabled via build config
00:02:49.121  	graph:	explicitly disabled via build config
00:02:49.121  	pdump:	explicitly disabled via build config
00:02:49.121  	proc-info:	explicitly disabled via build config
00:02:49.121  	test-acl:	explicitly disabled via build config
00:02:49.121  	test-bbdev:	explicitly disabled via build config
00:02:49.121  	test-cmdline:	explicitly disabled via build config
00:02:49.121  	test-compress-perf:	explicitly disabled via build config
00:02:49.121  	test-crypto-perf:	explicitly disabled via build config
00:02:49.121  	test-dma-perf:	explicitly disabled via build config
00:02:49.121  	test-eventdev:	explicitly disabled via build config
00:02:49.121  	test-fib:	explicitly disabled via build config
00:02:49.121  	test-flow-perf:	explicitly disabled via build config
00:02:49.121  	test-gpudev:	explicitly disabled via build config
00:02:49.121  	test-mldev:	explicitly disabled via build config
00:02:49.121  	test-pipeline:	explicitly disabled via build config
00:02:49.121  	test-pmd:	explicitly disabled via build config
00:02:49.121  	test-regex:	explicitly disabled via build config
00:02:49.121  	test-sad:	explicitly disabled via build config
00:02:49.121  	test-security-perf:	explicitly disabled via build config
00:02:49.121  	
00:02:49.121  libs:
00:02:49.121  	argparse:	explicitly disabled via build config
00:02:49.121  	metrics:	explicitly disabled via build config
00:02:49.121  	acl:	explicitly disabled via build config
00:02:49.121  	bbdev:	explicitly disabled via build config
00:02:49.121  	bitratestats:	explicitly disabled via build config
00:02:49.121  	bpf:	explicitly disabled via build config
00:02:49.121  	cfgfile:	explicitly disabled via build config
00:02:49.121  	distributor:	explicitly disabled via build config
00:02:49.121  	efd:	explicitly disabled via build config
00:02:49.121  	eventdev:	explicitly disabled via build config
00:02:49.121  	dispatcher:	explicitly disabled via build config
00:02:49.121  	gpudev:	explicitly disabled via build config
00:02:49.121  	gro:	explicitly disabled via build config
00:02:49.121  	gso:	explicitly disabled via build config
00:02:49.121  	ip_frag:	explicitly disabled via build config
00:02:49.121  	jobstats:	explicitly disabled via build config
00:02:49.121  	latencystats:	explicitly disabled via build config
00:02:49.121  	lpm:	explicitly disabled via build config
00:02:49.121  	member:	explicitly disabled via build config
00:02:49.121  	pcapng:	explicitly disabled via build config
00:02:49.121  	rawdev:	explicitly disabled via build config
00:02:49.121  	regexdev:	explicitly disabled via build config
00:02:49.121  	mldev:	explicitly disabled via build config
00:02:49.121  	rib:	explicitly disabled via build config
00:02:49.121  	sched:	explicitly disabled via build config
00:02:49.121  	stack:	explicitly disabled via build config
00:02:49.121  	ipsec:	explicitly disabled via build config
00:02:49.121  	pdcp:	explicitly disabled via build config
00:02:49.121  	fib:	explicitly disabled via build config
00:02:49.121  	port:	explicitly disabled via build config
00:02:49.121  	pdump:	explicitly disabled via build config
00:02:49.121  	table:	explicitly disabled via build config
00:02:49.121  	pipeline:	explicitly disabled via build config
00:02:49.121  	graph:	explicitly disabled via build config
00:02:49.121  	node:	explicitly disabled via build config
00:02:49.121  	
00:02:49.121  drivers:
00:02:49.121  	common/cpt:	not in enabled drivers build config
00:02:49.121  	common/dpaax:	not in enabled drivers build config
00:02:49.121  	common/iavf:	not in enabled drivers build config
00:02:49.121  	common/idpf:	not in enabled drivers build config
00:02:49.121  	common/ionic:	not in enabled drivers build config
00:02:49.121  	common/mvep:	not in enabled drivers build config
00:02:49.121  	common/octeontx:	not in enabled drivers build config
00:02:49.121  	bus/auxiliary:	not in enabled drivers build config
00:02:49.121  	bus/cdx:	not in enabled drivers build config
00:02:49.121  	bus/dpaa:	not in enabled drivers build config
00:02:49.121  	bus/fslmc:	not in enabled drivers build config
00:02:49.121  	bus/ifpga:	not in enabled drivers build config
00:02:49.121  	bus/platform:	not in enabled drivers build config
00:02:49.121  	bus/uacce:	not in enabled drivers build config
00:02:49.121  	bus/vmbus:	not in enabled drivers build config
00:02:49.121  	common/cnxk:	not in enabled drivers build config
00:02:49.121  	common/mlx5:	not in enabled drivers build config
00:02:49.121  	common/nfp:	not in enabled drivers build config
00:02:49.121  	common/nitrox:	not in enabled drivers build config
00:02:49.121  	common/qat:	not in enabled drivers build config
00:02:49.121  	common/sfc_efx:	not in enabled drivers build config
00:02:49.121  	mempool/bucket:	not in enabled drivers build config
00:02:49.121  	mempool/cnxk:	not in enabled drivers build config
00:02:49.121  	mempool/dpaa:	not in enabled drivers build config
00:02:49.121  	mempool/dpaa2:	not in enabled drivers build config
00:02:49.121  	mempool/octeontx:	not in enabled drivers build config
00:02:49.121  	mempool/stack:	not in enabled drivers build config
00:02:49.121  	dma/cnxk:	not in enabled drivers build config
00:02:49.121  	dma/dpaa:	not in enabled drivers build config
00:02:49.121  	dma/dpaa2:	not in enabled drivers build config
00:02:49.121  	dma/hisilicon:	not in enabled drivers build config
00:02:49.121  	dma/idxd:	not in enabled drivers build config
00:02:49.121  	dma/ioat:	not in enabled drivers build config
00:02:49.121  	dma/skeleton:	not in enabled drivers build config
00:02:49.121  	net/af_packet:	not in enabled drivers build config
00:02:49.121  	net/af_xdp:	not in enabled drivers build config
00:02:49.121  	net/ark:	not in enabled drivers build config
00:02:49.121  	net/atlantic:	not in enabled drivers build config
00:02:49.121  	net/avp:	not in enabled drivers build config
00:02:49.121  	net/axgbe:	not in enabled drivers build config
00:02:49.121  	net/bnx2x:	not in enabled drivers build config
00:02:49.121  	net/bnxt:	not in enabled drivers build config
00:02:49.121  	net/bonding:	not in enabled drivers build config
00:02:49.121  	net/cnxk:	not in enabled drivers build config
00:02:49.121  	net/cpfl:	not in enabled drivers build config
00:02:49.121  	net/cxgbe:	not in enabled drivers build config
00:02:49.121  	net/dpaa:	not in enabled drivers build config
00:02:49.121  	net/dpaa2:	not in enabled drivers build config
00:02:49.121  	net/e1000:	not in enabled drivers build config
00:02:49.121  	net/ena:	not in enabled drivers build config
00:02:49.121  	net/enetc:	not in enabled drivers build config
00:02:49.121  	net/enetfec:	not in enabled drivers build config
00:02:49.121  	net/enic:	not in enabled drivers build config
00:02:49.121  	net/failsafe:	not in enabled drivers build config
00:02:49.121  	net/fm10k:	not in enabled drivers build config
00:02:49.121  	net/gve:	not in enabled drivers build config
00:02:49.121  	net/hinic:	not in enabled drivers build config
00:02:49.121  	net/hns3:	not in enabled drivers build config
00:02:49.121  	net/i40e:	not in enabled drivers build config
00:02:49.121  	net/iavf:	not in enabled drivers build config
00:02:49.121  	net/ice:	not in enabled drivers build config
00:02:49.121  	net/idpf:	not in enabled drivers build config
00:02:49.121  	net/igc:	not in enabled drivers build config
00:02:49.121  	net/ionic:	not in enabled drivers build config
00:02:49.121  	net/ipn3ke:	not in enabled drivers build config
00:02:49.121  	net/ixgbe:	not in enabled drivers build config
00:02:49.121  	net/mana:	not in enabled drivers build config
00:02:49.121  	net/memif:	not in enabled drivers build config
00:02:49.121  	net/mlx4:	not in enabled drivers build config
00:02:49.121  	net/mlx5:	not in enabled drivers build config
00:02:49.121  	net/mvneta:	not in enabled drivers build config
00:02:49.121  	net/mvpp2:	not in enabled drivers build config
00:02:49.121  	net/netvsc:	not in enabled drivers build config
00:02:49.121  	net/nfb:	not in enabled drivers build config
00:02:49.121  	net/nfp:	not in enabled drivers build config
00:02:49.121  	net/ngbe:	not in enabled drivers build config
00:02:49.121  	net/null:	not in enabled drivers build config
00:02:49.121  	net/octeontx:	not in enabled drivers build config
00:02:49.121  	net/octeon_ep:	not in enabled drivers build config
00:02:49.121  	net/pcap:	not in enabled drivers build config
00:02:49.121  	net/pfe:	not in enabled drivers build config
00:02:49.121  	net/qede:	not in enabled drivers build config
00:02:49.121  	net/ring:	not in enabled drivers build config
00:02:49.121  	net/sfc:	not in enabled drivers build config
00:02:49.121  	net/softnic:	not in enabled drivers build config
00:02:49.121  	net/tap:	not in enabled drivers build config
00:02:49.121  	net/thunderx:	not in enabled drivers build config
00:02:49.121  	net/txgbe:	not in enabled drivers build config
00:02:49.121  	net/vdev_netvsc:	not in enabled drivers build config
00:02:49.121  	net/vhost:	not in enabled drivers build config
00:02:49.121  	net/virtio:	not in enabled drivers build config
00:02:49.121  	net/vmxnet3:	not in enabled drivers build config
00:02:49.121  	raw/*:	missing internal dependency, "rawdev"
00:02:49.121  	crypto/armv8:	not in enabled drivers build config
00:02:49.121  	crypto/bcmfs:	not in enabled drivers build config
00:02:49.121  	crypto/caam_jr:	not in enabled drivers build config
00:02:49.121  	crypto/ccp:	not in enabled drivers build config
00:02:49.121  	crypto/cnxk:	not in enabled drivers build config
00:02:49.121  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:49.121  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:49.121  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:49.121  	crypto/mlx5:	not in enabled drivers build config
00:02:49.121  	crypto/mvsam:	not in enabled drivers build config
00:02:49.121  	crypto/nitrox:	not in enabled drivers build config
00:02:49.121  	crypto/null:	not in enabled drivers build config
00:02:49.121  	crypto/octeontx:	not in enabled drivers build config
00:02:49.121  	crypto/openssl:	not in enabled drivers build config
00:02:49.121  	crypto/scheduler:	not in enabled drivers build config
00:02:49.121  	crypto/uadk:	not in enabled drivers build config
00:02:49.121  	crypto/virtio:	not in enabled drivers build config
00:02:49.121  	compress/isal:	not in enabled drivers build config
00:02:49.121  	compress/mlx5:	not in enabled drivers build config
00:02:49.121  	compress/nitrox:	not in enabled drivers build config
00:02:49.121  	compress/octeontx:	not in enabled drivers build config
00:02:49.121  	compress/zlib:	not in enabled drivers build config
00:02:49.121  	regex/*:	missing internal dependency, "regexdev"
00:02:49.121  	ml/*:	missing internal dependency, "mldev"
00:02:49.121  	vdpa/ifc:	not in enabled drivers build config
00:02:49.121  	vdpa/mlx5:	not in enabled drivers build config
00:02:49.121  	vdpa/nfp:	not in enabled drivers build config
00:02:49.121  	vdpa/sfc:	not in enabled drivers build config
00:02:49.121  	event/*:	missing internal dependency, "eventdev"
00:02:49.121  	baseband/*:	missing internal dependency, "bbdev"
00:02:49.121  	gpu/*:	missing internal dependency, "gpudev"
00:02:49.121  	
00:02:49.121  
00:02:49.121  Build targets in project: 85
00:02:49.121  
00:02:49.121  DPDK 24.03.0
00:02:49.121  
00:02:49.121    User defined options
00:02:49.121      buildtype          : debug
00:02:49.121      default_library    : shared
00:02:49.121      libdir             : lib
00:02:49.121      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:49.121      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:02:49.121      c_link_args        : 
00:02:49.121      cpu_instruction_set: native
00:02:49.121      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:49.121      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:49.121      enable_docs        : false
00:02:49.121      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:49.121      enable_kmods       : false
00:02:49.121      max_lcores         : 128
00:02:49.121      tests              : false
00:02:49.121  
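(Editor's note: the "User defined options" block above records how this job configured the bundled DPDK 24.03 before building it. As a rough sketch only, not the job's actual mechanism: in this run the options are supplied by SPDK's build scripts, and the prefix path is specific to this CI VM. An equivalent hand-run configure/build step, with all option names and values copied from the summary above, would look roughly like:

  # Sketch of a meson configure step equivalent to the option summary above.
  # Option values are taken verbatim from the log; the prefix path belongs to
  # the CI workspace and would differ on another machine.
  meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp -j 10   # matches the backend command reported later in the log
)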
00:02:49.121  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:49.121  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:49.121  [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:49.121  [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:49.121  [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:49.121  [4/268] Linking static target lib/librte_kvargs.a
00:02:49.121  [5/268] Linking static target lib/librte_log.a
00:02:49.121  [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:49.689  [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.947  [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:49.947  [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:49.947  [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:50.206  [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:50.206  [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:50.206  [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:50.206  [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:50.206  [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:50.206  [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:50.464  [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:50.464  [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.464  [19/268] Linking static target lib/librte_telemetry.a
00:02:50.464  [20/268] Linking target lib/librte_log.so.24.1
00:02:50.723  [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:50.982  [22/268] Linking target lib/librte_kvargs.so.24.1
00:02:50.982  [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:50.982  [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:50.982  [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:50.982  [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:51.240  [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:51.240  [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:51.240  [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:51.240  [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:51.240  [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:51.240  [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:51.498  [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.498  [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:51.498  [35/268] Linking target lib/librte_telemetry.so.24.1
00:02:51.756  [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:52.014  [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:52.014  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:52.014  [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:52.015  [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:52.015  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:52.015  [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:52.273  [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:52.273  [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:52.273  [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:52.531  [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:52.531  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:52.531  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:52.531  [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:52.531  [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:53.103  [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:53.103  [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:53.103  [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:53.362  [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:53.362  [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:53.362  [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:53.362  [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:53.621  [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:53.621  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:53.621  [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:53.621  [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:53.880  [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:53.880  [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:54.138  [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:54.397  [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:54.397  [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:54.397  [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:54.397  [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:54.655  [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:54.655  [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:54.655  [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:54.655  [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:54.912  [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:54.912  [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:54.913  [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:55.171  [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:55.171  [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:55.429  [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:55.429  [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:55.429  [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:55.687  [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:55.687  [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:55.687  [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:55.687  [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:55.945  [85/268] Linking static target lib/librte_eal.a
00:02:55.945  [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:56.203  [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:56.203  [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:56.203  [89/268] Linking static target lib/librte_rcu.a
00:02:56.203  [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:56.203  [91/268] Linking static target lib/librte_ring.a
00:02:56.203  [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:56.203  [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:56.203  [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:56.203  [95/268] Linking static target lib/librte_mempool.a
00:02:56.463  [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:56.463  [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:56.721  [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.721  [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:56.721  [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.721  [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:56.978  [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:56.978  [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:56.978  [104/268] Linking static target lib/librte_mbuf.a
00:02:56.978  [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:57.235  [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:57.235  [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:57.492  [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:57.492  [109/268] Linking static target lib/librte_net.a
00:02:57.492  [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:57.492  [111/268] Linking static target lib/librte_meter.a
00:02:57.750  [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:57.750  [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.750  [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:57.750  [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:58.050  [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.050  [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.308  [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:58.308  [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.594  [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:58.594  [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:58.858  [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:58.858  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:59.116  [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:59.116  [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:59.374  [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:59.374  [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:59.374  [128/268] Linking static target lib/librte_pci.a
00:02:59.374  [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:59.374  [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:59.374  [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:59.374  [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:59.633  [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:59.633  [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:59.633  [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:59.633  [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:59.633  [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:59.633  [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:59.633  [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:59.633  [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.633  [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:59.633  [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:59.633  [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:59.633  [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:59.891  [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:59.891  [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:59.892  [147/268] Linking static target lib/librte_ethdev.a
00:02:59.892  [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:00.459  [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:00.459  [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:00.459  [151/268] Linking static target lib/librte_cmdline.a
00:03:00.459  [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:00.459  [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:00.459  [154/268] Linking static target lib/librte_timer.a
00:03:00.459  [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:00.718  [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:00.976  [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:00.976  [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:00.976  [159/268] Linking static target lib/librte_hash.a
00:03:01.258  [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:01.258  [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:01.258  [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.258  [163/268] Linking static target lib/librte_compressdev.a
00:03:01.258  [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:01.533  [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:01.792  [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:01.792  [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:01.792  [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:01.792  [169/268] Linking static target lib/librte_dmadev.a
00:03:02.050  [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:02.050  [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:02.050  [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:02.050  [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:02.050  [174/268] Linking static target lib/librte_cryptodev.a
00:03:02.050  [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.309  [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.309  [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.309  [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:02.566  [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:02.566  [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:02.823  [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:02.823  [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.823  [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:03.082  [184/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:03.341  [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:03.341  [186/268] Linking static target lib/librte_power.a
00:03:03.341  [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:03.341  [188/268] Linking static target lib/librte_reorder.a
00:03:03.600  [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:03.600  [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:03.600  [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:03.600  [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:03.600  [193/268] Linking static target lib/librte_security.a
00:03:03.858  [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:04.117  [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.685  [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.685  [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.685  [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:04.685  [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:04.685  [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:04.685  [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.943  [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:05.202  [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:05.202  [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:05.202  [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:05.202  [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:05.461  [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:05.461  [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:05.461  [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:05.461  [210/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:05.719  [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:05.719  [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:05.719  [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:05.719  [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:05.719  [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:05.719  [216/268] Linking static target drivers/librte_bus_pci.a
00:03:05.981  [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:05.981  [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:05.981  [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:05.981  [220/268] Linking static target drivers/librte_bus_vdev.a
00:03:05.981  [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:05.981  [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:06.239  [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.239  [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:06.239  [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:06.240  [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:06.240  [227/268] Linking static target drivers/librte_mempool_ring.a
00:03:06.240  [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.807  [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:07.067  [230/268] Linking static target lib/librte_vhost.a
00:03:08.004  [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.004  [232/268] Linking target lib/librte_eal.so.24.1
00:03:08.263  [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:08.263  [234/268] Linking target lib/librte_meter.so.24.1
00:03:08.263  [235/268] Linking target lib/librte_pci.so.24.1
00:03:08.263  [236/268] Linking target lib/librte_timer.so.24.1
00:03:08.263  [237/268] Linking target drivers/librte_bus_vdev.so.24.1
00:03:08.263  [238/268] Linking target lib/librte_ring.so.24.1
00:03:08.263  [239/268] Linking target lib/librte_dmadev.so.24.1
00:03:08.524  [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:08.524  [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:08.524  [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:08.524  [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:08.524  [244/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.524  [245/268] Linking target drivers/librte_bus_pci.so.24.1
00:03:08.524  [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:08.524  [247/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.524  [248/268] Linking target lib/librte_rcu.so.24.1
00:03:08.524  [249/268] Linking target lib/librte_mempool.so.24.1
00:03:08.782  [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:08.782  [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:08.782  [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:03:08.782  [253/268] Linking target lib/librte_mbuf.so.24.1
00:03:09.040  [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:09.040  [255/268] Linking target lib/librte_net.so.24.1
00:03:09.040  [256/268] Linking target lib/librte_reorder.so.24.1
00:03:09.040  [257/268] Linking target lib/librte_compressdev.so.24.1
00:03:09.040  [258/268] Linking target lib/librte_cryptodev.so.24.1
00:03:09.040  [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:09.040  [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:09.299  [261/268] Linking target lib/librte_hash.so.24.1
00:03:09.299  [262/268] Linking target lib/librte_security.so.24.1
00:03:09.299  [263/268] Linking target lib/librte_cmdline.so.24.1
00:03:09.299  [264/268] Linking target lib/librte_ethdev.so.24.1
00:03:09.299  [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:09.299  [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:09.299  [267/268] Linking target lib/librte_power.so.24.1
00:03:09.299  [268/268] Linking target lib/librte_vhost.so.24.1
00:03:09.299  INFO: autodetecting backend as ninja
00:03:09.299  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:03:41.375    CC lib/log/log.o
00:03:41.375    CC lib/log/log_flags.o
00:03:41.375    CC lib/log/log_deprecated.o
00:03:41.375    CC lib/ut_mock/mock.o
00:03:41.375    CC lib/ut/ut.o
00:03:41.375    LIB libspdk_ut.a
00:03:41.375    LIB libspdk_log.a
00:03:41.375    LIB libspdk_ut_mock.a
00:03:41.375    SO libspdk_ut.so.2.0
00:03:41.375    SO libspdk_ut_mock.so.6.0
00:03:41.375    SO libspdk_log.so.7.1
00:03:41.375    SYMLINK libspdk_ut.so
00:03:41.375    SYMLINK libspdk_ut_mock.so
00:03:41.375    SYMLINK libspdk_log.so
00:03:41.375    CC lib/dma/dma.o
00:03:41.375    CC lib/ioat/ioat.o
00:03:41.375    CXX lib/trace_parser/trace.o
00:03:41.375    CC lib/util/bit_array.o
00:03:41.375    CC lib/util/base64.o
00:03:41.375    CC lib/util/cpuset.o
00:03:41.375    CC lib/util/crc16.o
00:03:41.375    CC lib/util/crc32.o
00:03:41.375    CC lib/util/crc32c.o
00:03:41.375    CC lib/vfio_user/host/vfio_user_pci.o
00:03:41.375    CC lib/util/crc32_ieee.o
00:03:41.375    CC lib/vfio_user/host/vfio_user.o
00:03:41.375    CC lib/util/crc64.o
00:03:41.375    CC lib/util/dif.o
00:03:41.375    LIB libspdk_dma.a
00:03:41.375    CC lib/util/fd.o
00:03:41.375    CC lib/util/fd_group.o
00:03:41.375    LIB libspdk_ioat.a
00:03:41.375    SO libspdk_dma.so.5.0
00:03:41.375    SO libspdk_ioat.so.7.0
00:03:41.375    SYMLINK libspdk_dma.so
00:03:41.375    SYMLINK libspdk_ioat.so
00:03:41.375    CC lib/util/file.o
00:03:41.375    CC lib/util/hexlify.o
00:03:41.375    CC lib/util/iov.o
00:03:41.375    CC lib/util/math.o
00:03:41.375    CC lib/util/net.o
00:03:41.375    CC lib/util/pipe.o
00:03:41.375    LIB libspdk_vfio_user.a
00:03:41.375    SO libspdk_vfio_user.so.5.0
00:03:41.375    CC lib/util/strerror_tls.o
00:03:41.375    SYMLINK libspdk_vfio_user.so
00:03:41.375    CC lib/util/string.o
00:03:41.375    CC lib/util/uuid.o
00:03:41.375    CC lib/util/xor.o
00:03:41.375    CC lib/util/zipf.o
00:03:41.375    CC lib/util/md5.o
00:03:41.375    LIB libspdk_util.a
00:03:41.375    LIB libspdk_trace_parser.a
00:03:41.375    SO libspdk_trace_parser.so.6.0
00:03:41.375    SO libspdk_util.so.10.1
00:03:41.375    SYMLINK libspdk_trace_parser.so
00:03:41.375    SYMLINK libspdk_util.so
00:03:41.375    CC lib/idxd/idxd.o
00:03:41.375    CC lib/idxd/idxd_user.o
00:03:41.375    CC lib/idxd/idxd_kernel.o
00:03:41.375    CC lib/json/json_parse.o
00:03:41.375    CC lib/conf/conf.o
00:03:41.375    CC lib/json/json_util.o
00:03:41.375    CC lib/json/json_write.o
00:03:41.375    CC lib/env_dpdk/env.o
00:03:41.375    CC lib/rdma_utils/rdma_utils.o
00:03:41.375    CC lib/vmd/vmd.o
00:03:41.375    LIB libspdk_conf.a
00:03:41.375    CC lib/env_dpdk/memory.o
00:03:41.375    CC lib/env_dpdk/pci.o
00:03:41.375    CC lib/env_dpdk/init.o
00:03:41.375    CC lib/env_dpdk/threads.o
00:03:41.375    SO libspdk_conf.so.6.0
00:03:41.375    LIB libspdk_json.a
00:03:41.375    LIB libspdk_rdma_utils.a
00:03:41.375    SO libspdk_rdma_utils.so.1.0
00:03:41.375    SO libspdk_json.so.6.0
00:03:41.375    SYMLINK libspdk_conf.so
00:03:41.375    CC lib/vmd/led.o
00:03:41.375    LIB libspdk_idxd.a
00:03:41.375    CC lib/env_dpdk/pci_ioat.o
00:03:41.375    SYMLINK libspdk_rdma_utils.so
00:03:41.375    CC lib/env_dpdk/pci_virtio.o
00:03:41.375    SYMLINK libspdk_json.so
00:03:41.375    SO libspdk_idxd.so.12.1
00:03:41.375    SYMLINK libspdk_idxd.so
00:03:41.375    CC lib/env_dpdk/pci_vmd.o
00:03:41.375    CC lib/env_dpdk/pci_idxd.o
00:03:41.375    LIB libspdk_vmd.a
00:03:41.375    CC lib/env_dpdk/pci_event.o
00:03:41.375    SO libspdk_vmd.so.6.0
00:03:41.375    CC lib/env_dpdk/sigbus_handler.o
00:03:41.375    CC lib/jsonrpc/jsonrpc_server.o
00:03:41.375    CC lib/env_dpdk/pci_dpdk.o
00:03:41.375    SYMLINK libspdk_vmd.so
00:03:41.375    CC lib/rdma_provider/common.o
00:03:41.375    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:41.375    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:41.375    CC lib/jsonrpc/jsonrpc_client.o
00:03:41.375    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:41.375    CC lib/rdma_provider/rdma_provider_verbs.o
00:03:41.375    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:41.375    LIB libspdk_jsonrpc.a
00:03:41.375    SO libspdk_jsonrpc.so.6.0
00:03:41.375    LIB libspdk_rdma_provider.a
00:03:41.375    SO libspdk_rdma_provider.so.7.0
00:03:41.375    LIB libspdk_env_dpdk.a
00:03:41.375    SYMLINK libspdk_jsonrpc.so
00:03:41.375    SYMLINK libspdk_rdma_provider.so
00:03:41.375    SO libspdk_env_dpdk.so.15.1
00:03:41.375    SYMLINK libspdk_env_dpdk.so
00:03:41.375    CC lib/rpc/rpc.o
00:03:41.375    LIB libspdk_rpc.a
00:03:41.375    SO libspdk_rpc.so.6.0
00:03:41.375    SYMLINK libspdk_rpc.so
00:03:41.375    CC lib/keyring/keyring.o
00:03:41.375    CC lib/keyring/keyring_rpc.o
00:03:41.375    CC lib/notify/notify.o
00:03:41.375    CC lib/notify/notify_rpc.o
00:03:41.375    CC lib/trace/trace.o
00:03:41.375    CC lib/trace/trace_flags.o
00:03:41.375    CC lib/trace/trace_rpc.o
00:03:41.375    LIB libspdk_notify.a
00:03:41.375    SO libspdk_notify.so.6.0
00:03:41.375    LIB libspdk_keyring.a
00:03:41.375    LIB libspdk_trace.a
00:03:41.375    SYMLINK libspdk_notify.so
00:03:41.375    SO libspdk_keyring.so.2.0
00:03:41.375    SO libspdk_trace.so.11.0
00:03:41.375    SYMLINK libspdk_keyring.so
00:03:41.375    SYMLINK libspdk_trace.so
00:03:41.375    CC lib/thread/thread.o
00:03:41.375    CC lib/thread/iobuf.o
00:03:41.375    CC lib/sock/sock.o
00:03:41.375    CC lib/sock/sock_rpc.o
00:03:41.375    LIB libspdk_sock.a
00:03:41.375    SO libspdk_sock.so.10.0
00:03:41.375    SYMLINK libspdk_sock.so
00:03:41.375    LIB libspdk_thread.a
00:03:41.375    SO libspdk_thread.so.11.0
00:03:41.634    CC lib/nvme/nvme_ctrlr.o
00:03:41.634    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:41.634    CC lib/nvme/nvme_fabric.o
00:03:41.635    CC lib/nvme/nvme_ns_cmd.o
00:03:41.635    CC lib/nvme/nvme_pcie_common.o
00:03:41.635    CC lib/nvme/nvme_pcie.o
00:03:41.635    CC lib/nvme/nvme_ns.o
00:03:41.635    CC lib/nvme/nvme_qpair.o
00:03:41.635    CC lib/nvme/nvme.o
00:03:41.635    SYMLINK libspdk_thread.so
00:03:41.635    CC lib/nvme/nvme_quirks.o
00:03:42.202    CC lib/nvme/nvme_transport.o
00:03:42.202    CC lib/nvme/nvme_discovery.o
00:03:42.202    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:42.460    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:42.460    CC lib/accel/accel.o
00:03:42.460    CC lib/blob/blobstore.o
00:03:42.460    CC lib/nvme/nvme_tcp.o
00:03:42.460    CC lib/nvme/nvme_opal.o
00:03:42.460    CC lib/nvme/nvme_io_msg.o
00:03:42.720    CC lib/nvme/nvme_poll_group.o
00:03:42.720    CC lib/blob/request.o
00:03:42.979    CC lib/blob/zeroes.o
00:03:42.979    CC lib/accel/accel_rpc.o
00:03:42.979    CC lib/blob/blob_bs_dev.o
00:03:42.979    CC lib/nvme/nvme_zns.o
00:03:42.979    CC lib/nvme/nvme_stubs.o
00:03:42.979    CC lib/nvme/nvme_auth.o
00:03:42.979    CC lib/nvme/nvme_cuse.o
00:03:42.979    CC lib/accel/accel_sw.o
00:03:43.302    CC lib/init/json_config.o
00:03:43.303    LIB libspdk_accel.a
00:03:43.303    CC lib/virtio/virtio.o
00:03:43.303    SO libspdk_accel.so.16.0
00:03:43.303    CC lib/virtio/virtio_vhost_user.o
00:03:43.303    SYMLINK libspdk_accel.so
00:03:43.561    CC lib/virtio/virtio_vfio_user.o
00:03:43.561    CC lib/init/subsystem.o
00:03:43.561    CC lib/virtio/virtio_pci.o
00:03:43.561    LIB libspdk_blob.a
00:03:43.561    CC lib/nvme/nvme_rdma.o
00:03:43.561    CC lib/fsdev/fsdev.o
00:03:43.561    CC lib/fsdev/fsdev_io.o
00:03:43.561    SO libspdk_blob.so.11.0
00:03:43.561    CC lib/init/subsystem_rpc.o
00:03:43.561    CC lib/fsdev/fsdev_rpc.o
00:03:43.819    SYMLINK libspdk_blob.so
00:03:43.820    CC lib/init/rpc.o
00:03:43.820    CC lib/bdev/bdev.o
00:03:43.820    LIB libspdk_virtio.a
00:03:43.820    CC lib/bdev/bdev_rpc.o
00:03:43.820    SO libspdk_virtio.so.7.0
00:03:43.820    CC lib/bdev/bdev_zone.o
00:03:43.820    CC lib/bdev/part.o
00:03:43.820    SYMLINK libspdk_virtio.so
00:03:43.820    CC lib/bdev/scsi_nvme.o
00:03:43.820    LIB libspdk_init.a
00:03:44.078    SO libspdk_init.so.6.0
00:03:44.078    CC lib/blobfs/blobfs.o
00:03:44.078    SYMLINK libspdk_init.so
00:03:44.078    CC lib/lvol/lvol.o
00:03:44.078    CC lib/blobfs/tree.o
00:03:44.078    LIB libspdk_fsdev.a
00:03:44.078    SO libspdk_fsdev.so.2.0
00:03:44.078    SYMLINK libspdk_fsdev.so
00:03:44.413    CC lib/event/app.o
00:03:44.413    CC lib/event/reactor.o
00:03:44.413    CC lib/event/app_rpc.o
00:03:44.413    CC lib/event/scheduler_static.o
00:03:44.413    CC lib/event/log_rpc.o
00:03:44.413    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:44.413    LIB libspdk_blobfs.a
00:03:44.413    SO libspdk_blobfs.so.10.0
00:03:44.413    LIB libspdk_lvol.a
00:03:44.413    LIB libspdk_nvme.a
00:03:44.413    SYMLINK libspdk_blobfs.so
00:03:44.413    SO libspdk_lvol.so.10.0
00:03:44.672    SYMLINK libspdk_lvol.so
00:03:44.672    LIB libspdk_event.a
00:03:44.672    SO libspdk_nvme.so.15.0
00:03:44.672    SO libspdk_event.so.14.0
00:03:44.672    LIB libspdk_fuse_dispatcher.a
00:03:44.931    SO libspdk_fuse_dispatcher.so.1.0
00:03:44.931    SYMLINK libspdk_event.so
00:03:44.931    SYMLINK libspdk_fuse_dispatcher.so
00:03:44.931    SYMLINK libspdk_nvme.so
00:03:44.931    LIB libspdk_bdev.a
00:03:44.931    SO libspdk_bdev.so.17.0
00:03:45.190    SYMLINK libspdk_bdev.so
00:03:45.190    CC lib/ftl/ftl_core.o
00:03:45.190    CC lib/ftl/ftl_init.o
00:03:45.190    CC lib/ftl/ftl_layout.o
00:03:45.190    CC lib/ublk/ublk.o
00:03:45.190    CC lib/ublk/ublk_rpc.o
00:03:45.190    CC lib/ftl/ftl_io.o
00:03:45.190    CC lib/ftl/ftl_debug.o
00:03:45.190    CC lib/nvmf/ctrlr.o
00:03:45.190    CC lib/scsi/dev.o
00:03:45.190    CC lib/nbd/nbd.o
00:03:45.447    CC lib/nbd/nbd_rpc.o
00:03:45.447    CC lib/scsi/lun.o
00:03:45.447    CC lib/ftl/ftl_sb.o
00:03:45.447    CC lib/ftl/ftl_l2p.o
00:03:45.447    CC lib/nvmf/ctrlr_discovery.o
00:03:45.447    CC lib/ftl/ftl_l2p_flat.o
00:03:45.447    CC lib/ftl/ftl_nv_cache.o
00:03:45.705    CC lib/ftl/ftl_band.o
00:03:45.705    LIB libspdk_nbd.a
00:03:45.705    LIB libspdk_ublk.a
00:03:45.705    SO libspdk_nbd.so.7.0
00:03:45.705    SO libspdk_ublk.so.3.0
00:03:45.705    CC lib/ftl/ftl_band_ops.o
00:03:45.705    CC lib/nvmf/ctrlr_bdev.o
00:03:45.705    SYMLINK libspdk_nbd.so
00:03:45.705    CC lib/nvmf/subsystem.o
00:03:45.705    CC lib/scsi/port.o
00:03:45.705    SYMLINK libspdk_ublk.so
00:03:45.705    CC lib/ftl/ftl_writer.o
00:03:45.705    CC lib/nvmf/nvmf.o
00:03:45.705    CC lib/nvmf/nvmf_rpc.o
00:03:45.962    CC lib/ftl/ftl_rq.o
00:03:45.962    CC lib/scsi/scsi.o
00:03:45.962    CC lib/nvmf/transport.o
00:03:45.962    CC lib/scsi/scsi_bdev.o
00:03:45.962    CC lib/nvmf/tcp.o
00:03:45.962    CC lib/ftl/ftl_reloc.o
00:03:45.962    CC lib/nvmf/stubs.o
00:03:45.962    CC lib/nvmf/mdns_server.o
00:03:45.962    CC lib/nvmf/rdma.o
00:03:46.220    CC lib/nvmf/auth.o
00:03:46.220    CC lib/ftl/ftl_l2p_cache.o
00:03:46.220    CC lib/scsi/scsi_pr.o
00:03:46.220    CC lib/scsi/scsi_rpc.o
00:03:46.220    CC lib/scsi/task.o
00:03:46.220    CC lib/ftl/ftl_p2l.o
00:03:46.220    CC lib/ftl/ftl_p2l_log.o
00:03:46.220    CC lib/ftl/mngt/ftl_mngt.o
00:03:46.478    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:46.478    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:46.479    LIB libspdk_scsi.a
00:03:46.479    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:46.479    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:46.479    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:46.479    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:46.479    SO libspdk_scsi.so.9.0
00:03:46.479    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:46.737    SYMLINK libspdk_scsi.so
00:03:46.737    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:46.737    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:46.737    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:46.737    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:46.737    LIB libspdk_nvmf.a
00:03:46.737    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:46.737    CC lib/ftl/utils/ftl_conf.o
00:03:46.737    CC lib/iscsi/conn.o
00:03:46.737    CC lib/vhost/vhost.o
00:03:46.737    CC lib/iscsi/init_grp.o
00:03:46.737    SO libspdk_nvmf.so.20.0
00:03:46.995    CC lib/vhost/vhost_rpc.o
00:03:46.995    CC lib/ftl/utils/ftl_md.o
00:03:46.995    CC lib/ftl/utils/ftl_mempool.o
00:03:46.995    CC lib/ftl/utils/ftl_bitmap.o
00:03:46.995    SYMLINK libspdk_nvmf.so
00:03:46.995    CC lib/ftl/utils/ftl_property.o
00:03:46.995    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:46.995    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:47.254    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:47.254    CC lib/vhost/vhost_scsi.o
00:03:47.254    CC lib/iscsi/iscsi.o
00:03:47.254    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:47.254    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:47.254    CC lib/vhost/vhost_blk.o
00:03:47.254    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:47.254    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:47.254    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:47.512    CC lib/vhost/rte_vhost_user.o
00:03:47.512    CC lib/iscsi/param.o
00:03:47.512    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:47.512    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:47.512    CC lib/iscsi/portal_grp.o
00:03:47.512    CC lib/iscsi/tgt_node.o
00:03:47.512    CC lib/iscsi/iscsi_subsystem.o
00:03:47.770    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:47.770    CC lib/iscsi/iscsi_rpc.o
00:03:47.770    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:47.770    CC lib/iscsi/task.o
00:03:47.770    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:47.770    CC lib/ftl/base/ftl_base_dev.o
00:03:47.770    CC lib/ftl/base/ftl_base_bdev.o
00:03:48.027    CC lib/ftl/ftl_trace.o
00:03:48.027    LIB libspdk_iscsi.a
00:03:48.027    LIB libspdk_ftl.a
00:03:48.027    SO libspdk_iscsi.so.8.0
00:03:48.285    SYMLINK libspdk_iscsi.so
00:03:48.285    SO libspdk_ftl.so.9.0
00:03:48.543    LIB libspdk_vhost.a
00:03:48.543    SO libspdk_vhost.so.8.0
00:03:48.543    SYMLINK libspdk_ftl.so
00:03:48.800    SYMLINK libspdk_vhost.so
00:03:49.058    CC module/env_dpdk/env_dpdk_rpc.o
00:03:49.058    CC module/keyring/file/keyring.o
00:03:49.058    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:49.058    CC module/accel/error/accel_error.o
00:03:49.058    CC module/blob/bdev/blob_bdev.o
00:03:49.058    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:49.058    CC module/scheduler/gscheduler/gscheduler.o
00:03:49.058    CC module/accel/ioat/accel_ioat.o
00:03:49.316    CC module/fsdev/aio/fsdev_aio.o
00:03:49.316    CC module/sock/posix/posix.o
00:03:49.316    LIB libspdk_env_dpdk_rpc.a
00:03:49.316    SO libspdk_env_dpdk_rpc.so.6.0
00:03:49.316    CC module/keyring/file/keyring_rpc.o
00:03:49.316    LIB libspdk_scheduler_dpdk_governor.a
00:03:49.316    LIB libspdk_scheduler_gscheduler.a
00:03:49.316    SYMLINK libspdk_env_dpdk_rpc.so
00:03:49.316    CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:49.316    CC module/accel/error/accel_error_rpc.o
00:03:49.316    SO libspdk_scheduler_gscheduler.so.4.0
00:03:49.316    SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:49.316    CC module/accel/ioat/accel_ioat_rpc.o
00:03:49.316    LIB libspdk_scheduler_dynamic.a
00:03:49.316    LIB libspdk_blob_bdev.a
00:03:49.316    SO libspdk_scheduler_dynamic.so.4.0
00:03:49.316    SO libspdk_blob_bdev.so.11.0
00:03:49.316    SYMLINK libspdk_scheduler_gscheduler.so
00:03:49.316    SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:49.316    CC module/fsdev/aio/linux_aio_mgr.o
00:03:49.316    SYMLINK libspdk_scheduler_dynamic.so
00:03:49.574    SYMLINK libspdk_blob_bdev.so
00:03:49.574    LIB libspdk_keyring_file.a
00:03:49.574    LIB libspdk_accel_error.a
00:03:49.574    SO libspdk_keyring_file.so.2.0
00:03:49.574    LIB libspdk_accel_ioat.a
00:03:49.574    SO libspdk_accel_error.so.2.0
00:03:49.574    SO libspdk_accel_ioat.so.6.0
00:03:49.574    SYMLINK libspdk_keyring_file.so
00:03:49.574    SYMLINK libspdk_accel_error.so
00:03:49.574    SYMLINK libspdk_accel_ioat.so
00:03:49.574    CC module/keyring/linux/keyring.o
00:03:49.574    CC module/keyring/linux/keyring_rpc.o
00:03:49.574    CC module/accel/dsa/accel_dsa.o
00:03:49.574    CC module/accel/iaa/accel_iaa.o
00:03:49.832    LIB libspdk_sock_posix.a
00:03:49.832    SO libspdk_sock_posix.so.6.0
00:03:49.832    LIB libspdk_fsdev_aio.a
00:03:49.832    CC module/bdev/gpt/gpt.o
00:03:49.832    CC module/bdev/gpt/vbdev_gpt.o
00:03:49.832    SO libspdk_fsdev_aio.so.1.0
00:03:49.832    LIB libspdk_keyring_linux.a
00:03:49.832    CC module/bdev/delay/vbdev_delay.o
00:03:49.832    CC module/bdev/error/vbdev_error.o
00:03:49.832    SO libspdk_keyring_linux.so.1.0
00:03:49.832    SYMLINK libspdk_sock_posix.so
00:03:49.832    CC module/accel/dsa/accel_dsa_rpc.o
00:03:49.832    CC module/accel/iaa/accel_iaa_rpc.o
00:03:49.832    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:49.832    SYMLINK libspdk_fsdev_aio.so
00:03:49.832    SYMLINK libspdk_keyring_linux.so
00:03:49.832    CC module/blobfs/bdev/blobfs_bdev.o
00:03:50.091    LIB libspdk_accel_dsa.a
00:03:50.091    LIB libspdk_bdev_gpt.a
00:03:50.091    LIB libspdk_accel_iaa.a
00:03:50.091    SO libspdk_accel_dsa.so.5.0
00:03:50.091    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:50.091    LIB libspdk_bdev_delay.a
00:03:50.091    CC module/bdev/error/vbdev_error_rpc.o
00:03:50.091    SO libspdk_bdev_gpt.so.6.0
00:03:50.091    CC module/bdev/lvol/vbdev_lvol.o
00:03:50.091    CC module/bdev/malloc/bdev_malloc.o
00:03:50.091    SO libspdk_accel_iaa.so.3.0
00:03:50.091    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:50.091    SO libspdk_bdev_delay.so.6.0
00:03:50.091    SYMLINK libspdk_accel_dsa.so
00:03:50.091    SYMLINK libspdk_bdev_gpt.so
00:03:50.091    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:50.091    SYMLINK libspdk_accel_iaa.so
00:03:50.091    SYMLINK libspdk_bdev_delay.so
00:03:50.091    CC module/bdev/null/bdev_null.o
00:03:50.350    LIB libspdk_blobfs_bdev.a
00:03:50.350    LIB libspdk_bdev_error.a
00:03:50.350    SO libspdk_blobfs_bdev.so.6.0
00:03:50.350    SO libspdk_bdev_error.so.6.0
00:03:50.350    LIB libspdk_bdev_malloc.a
00:03:50.350    SYMLINK libspdk_blobfs_bdev.so
00:03:50.350    CC module/bdev/passthru/vbdev_passthru.o
00:03:50.350    SYMLINK libspdk_bdev_error.so
00:03:50.350    CC module/bdev/raid/bdev_raid.o
00:03:50.350    SO libspdk_bdev_malloc.so.6.0
00:03:50.350    CC module/bdev/nvme/bdev_nvme.o
00:03:50.350    CC module/bdev/null/bdev_null_rpc.o
00:03:50.350    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:50.350    CC module/bdev/split/vbdev_split.o
00:03:50.350    SYMLINK libspdk_bdev_malloc.so
00:03:50.350    LIB libspdk_bdev_lvol.a
00:03:50.608    SO libspdk_bdev_lvol.so.6.0
00:03:50.608    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:50.608    CC module/bdev/aio/bdev_aio.o
00:03:50.608    SYMLINK libspdk_bdev_lvol.so
00:03:50.608    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:50.608    LIB libspdk_bdev_null.a
00:03:50.608    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:50.608    CC module/bdev/ftl/bdev_ftl.o
00:03:50.608    SO libspdk_bdev_null.so.6.0
00:03:50.608    CC module/bdev/split/vbdev_split_rpc.o
00:03:50.608    SYMLINK libspdk_bdev_null.so
00:03:50.608    CC module/bdev/aio/bdev_aio_rpc.o
00:03:50.866    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:50.866    LIB libspdk_bdev_passthru.a
00:03:50.866    LIB libspdk_bdev_zone_block.a
00:03:50.866    CC module/bdev/nvme/nvme_rpc.o
00:03:50.866    SO libspdk_bdev_passthru.so.6.0
00:03:50.866    SO libspdk_bdev_zone_block.so.6.0
00:03:50.866    LIB libspdk_bdev_split.a
00:03:50.866    CC module/bdev/nvme/bdev_mdns_client.o
00:03:50.866    CC module/bdev/nvme/vbdev_opal.o
00:03:50.867    SO libspdk_bdev_split.so.6.0
00:03:50.867    LIB libspdk_bdev_aio.a
00:03:50.867    SYMLINK libspdk_bdev_passthru.so
00:03:50.867    SYMLINK libspdk_bdev_zone_block.so
00:03:50.867    CC module/bdev/raid/bdev_raid_rpc.o
00:03:50.867    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:50.867    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:50.867    SO libspdk_bdev_aio.so.6.0
00:03:50.867    SYMLINK libspdk_bdev_split.so
00:03:50.867    CC module/bdev/raid/bdev_raid_sb.o
00:03:50.867    LIB libspdk_bdev_ftl.a
00:03:51.125    SYMLINK libspdk_bdev_aio.so
00:03:51.125    CC module/bdev/raid/raid0.o
00:03:51.125    SO libspdk_bdev_ftl.so.6.0
00:03:51.125    SYMLINK libspdk_bdev_ftl.so
00:03:51.125    CC module/bdev/raid/raid1.o
00:03:51.125    CC module/bdev/raid/concat.o
00:03:51.125    CC module/bdev/iscsi/bdev_iscsi.o
00:03:51.125    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:51.125    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:51.125    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:51.125    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:51.384    LIB libspdk_bdev_raid.a
00:03:51.384    SO libspdk_bdev_raid.so.6.0
00:03:51.384    LIB libspdk_bdev_iscsi.a
00:03:51.384    LIB libspdk_bdev_nvme.a
00:03:51.384    SYMLINK libspdk_bdev_raid.so
00:03:51.384    SO libspdk_bdev_iscsi.so.6.0
00:03:51.384    LIB libspdk_bdev_virtio.a
00:03:51.642    SO libspdk_bdev_nvme.so.7.1
00:03:51.642    SO libspdk_bdev_virtio.so.6.0
00:03:51.642    SYMLINK libspdk_bdev_iscsi.so
00:03:51.642    SYMLINK libspdk_bdev_nvme.so
00:03:51.642    SYMLINK libspdk_bdev_virtio.so
00:03:52.209    CC module/event/subsystems/sock/sock.o
00:03:52.209    CC module/event/subsystems/vmd/vmd.o
00:03:52.209    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:52.209    CC module/event/subsystems/fsdev/fsdev.o
00:03:52.209    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:52.209    CC module/event/subsystems/iobuf/iobuf.o
00:03:52.209    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:52.209    CC module/event/subsystems/keyring/keyring.o
00:03:52.209    CC module/event/subsystems/scheduler/scheduler.o
00:03:52.209    LIB libspdk_event_keyring.a
00:03:52.209    LIB libspdk_event_fsdev.a
00:03:52.209    LIB libspdk_event_sock.a
00:03:52.209    LIB libspdk_event_vmd.a
00:03:52.209    LIB libspdk_event_iobuf.a
00:03:52.209    SO libspdk_event_keyring.so.1.0
00:03:52.209    SO libspdk_event_fsdev.so.1.0
00:03:52.209    SO libspdk_event_sock.so.5.0
00:03:52.209    LIB libspdk_event_vhost_blk.a
00:03:52.209    LIB libspdk_event_scheduler.a
00:03:52.209    SO libspdk_event_vmd.so.6.0
00:03:52.209    SO libspdk_event_iobuf.so.3.0
00:03:52.209    SO libspdk_event_vhost_blk.so.3.0
00:03:52.209    SYMLINK libspdk_event_keyring.so
00:03:52.209    SYMLINK libspdk_event_fsdev.so
00:03:52.209    SO libspdk_event_scheduler.so.4.0
00:03:52.209    SYMLINK libspdk_event_sock.so
00:03:52.209    SYMLINK libspdk_event_vmd.so
00:03:52.209    SYMLINK libspdk_event_iobuf.so
00:03:52.209    SYMLINK libspdk_event_vhost_blk.so
00:03:52.209    SYMLINK libspdk_event_scheduler.so
00:03:52.468    CC module/event/subsystems/accel/accel.o
00:03:52.727    LIB libspdk_event_accel.a
00:03:52.727    SO libspdk_event_accel.so.6.0
00:03:52.727    SYMLINK libspdk_event_accel.so
00:03:52.985    CC module/event/subsystems/bdev/bdev.o
00:03:53.244    LIB libspdk_event_bdev.a
00:03:53.244    SO libspdk_event_bdev.so.6.0
00:03:53.504    SYMLINK libspdk_event_bdev.so
00:03:53.762    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:53.762    CC module/event/subsystems/scsi/scsi.o
00:03:53.762    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:53.762    CC module/event/subsystems/ublk/ublk.o
00:03:53.762    CC module/event/subsystems/nbd/nbd.o
00:03:53.762    LIB libspdk_event_nbd.a
00:03:53.762    LIB libspdk_event_ublk.a
00:03:53.762    LIB libspdk_event_scsi.a
00:03:53.762    SO libspdk_event_ublk.so.3.0
00:03:53.762    SO libspdk_event_nbd.so.6.0
00:03:53.762    SO libspdk_event_scsi.so.6.0
00:03:54.021    SYMLINK libspdk_event_ublk.so
00:03:54.021    SYMLINK libspdk_event_nbd.so
00:03:54.021    LIB libspdk_event_nvmf.a
00:03:54.021    SYMLINK libspdk_event_scsi.so
00:03:54.021    SO libspdk_event_nvmf.so.6.0
00:03:54.021    SYMLINK libspdk_event_nvmf.so
00:03:54.280    CC module/event/subsystems/iscsi/iscsi.o
00:03:54.280    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:54.280    LIB libspdk_event_vhost_scsi.a
00:03:54.280    SO libspdk_event_vhost_scsi.so.3.0
00:03:54.537    LIB libspdk_event_iscsi.a
00:03:54.537    SYMLINK libspdk_event_vhost_scsi.so
00:03:54.537    SO libspdk_event_iscsi.so.6.0
00:03:54.537    SYMLINK libspdk_event_iscsi.so
00:03:54.796    SO libspdk.so.6.0
00:03:54.796    SYMLINK libspdk.so
00:03:55.055    CC app/trace_record/trace_record.o
00:03:55.055    CXX app/trace/trace.o
00:03:55.055    CC app/spdk_lspci/spdk_lspci.o
00:03:55.055    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:55.055    CC app/nvmf_tgt/nvmf_main.o
00:03:55.055    CC app/iscsi_tgt/iscsi_tgt.o
00:03:55.055    CC app/spdk_tgt/spdk_tgt.o
00:03:55.055    CC examples/ioat/perf/perf.o
00:03:55.055    CC examples/util/zipf/zipf.o
00:03:55.055    CC test/thread/poller_perf/poller_perf.o
00:03:55.055    LINK spdk_lspci
00:03:55.055    LINK spdk_trace_record
00:03:55.313    LINK interrupt_tgt
00:03:55.313    LINK zipf
00:03:55.313    LINK poller_perf
00:03:55.313    LINK nvmf_tgt
00:03:55.313    LINK iscsi_tgt
00:03:55.313    LINK ioat_perf
00:03:55.313    LINK spdk_tgt
00:03:55.313    LINK spdk_trace
00:03:55.313    CC app/spdk_nvme_perf/perf.o
00:03:55.313    CC app/spdk_nvme_identify/identify.o
00:03:55.571    CC app/spdk_nvme_discover/discovery_aer.o
00:03:55.571    CC examples/ioat/verify/verify.o
00:03:55.571    CC app/spdk_top/spdk_top.o
00:03:55.571    TEST_HEADER include/spdk/accel.h
00:03:55.571    TEST_HEADER include/spdk/accel_module.h
00:03:55.571    TEST_HEADER include/spdk/assert.h
00:03:55.571    TEST_HEADER include/spdk/barrier.h
00:03:55.571    TEST_HEADER include/spdk/base64.h
00:03:55.571    TEST_HEADER include/spdk/bdev.h
00:03:55.571    CC test/dma/test_dma/test_dma.o
00:03:55.571    TEST_HEADER include/spdk/bdev_module.h
00:03:55.571    TEST_HEADER include/spdk/bdev_zone.h
00:03:55.571    TEST_HEADER include/spdk/bit_array.h
00:03:55.571    TEST_HEADER include/spdk/bit_pool.h
00:03:55.571    TEST_HEADER include/spdk/blob_bdev.h
00:03:55.571    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:55.571    TEST_HEADER include/spdk/blobfs.h
00:03:55.571    TEST_HEADER include/spdk/blob.h
00:03:55.571    CC test/app/bdev_svc/bdev_svc.o
00:03:55.571    TEST_HEADER include/spdk/conf.h
00:03:55.571    TEST_HEADER include/spdk/config.h
00:03:55.571    TEST_HEADER include/spdk/cpuset.h
00:03:55.571    TEST_HEADER include/spdk/crc16.h
00:03:55.571    TEST_HEADER include/spdk/crc32.h
00:03:55.571    TEST_HEADER include/spdk/crc64.h
00:03:55.571    TEST_HEADER include/spdk/dif.h
00:03:55.571    TEST_HEADER include/spdk/dma.h
00:03:55.571    TEST_HEADER include/spdk/endian.h
00:03:55.571    TEST_HEADER include/spdk/env_dpdk.h
00:03:55.571    TEST_HEADER include/spdk/env.h
00:03:55.571    TEST_HEADER include/spdk/event.h
00:03:55.571    TEST_HEADER include/spdk/fd_group.h
00:03:55.571    TEST_HEADER include/spdk/fd.h
00:03:55.571    TEST_HEADER include/spdk/file.h
00:03:55.571    TEST_HEADER include/spdk/fsdev.h
00:03:55.571    TEST_HEADER include/spdk/fsdev_module.h
00:03:55.571    TEST_HEADER include/spdk/ftl.h
00:03:55.571    TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:55.571    TEST_HEADER include/spdk/gpt_spec.h
00:03:55.571    TEST_HEADER include/spdk/hexlify.h
00:03:55.571    TEST_HEADER include/spdk/histogram_data.h
00:03:55.571    TEST_HEADER include/spdk/idxd.h
00:03:55.571    TEST_HEADER include/spdk/idxd_spec.h
00:03:55.571    TEST_HEADER include/spdk/init.h
00:03:55.571    TEST_HEADER include/spdk/ioat.h
00:03:55.571    TEST_HEADER include/spdk/ioat_spec.h
00:03:55.571    TEST_HEADER include/spdk/iscsi_spec.h
00:03:55.571    TEST_HEADER include/spdk/json.h
00:03:55.571    TEST_HEADER include/spdk/jsonrpc.h
00:03:55.571    TEST_HEADER include/spdk/keyring.h
00:03:55.571    TEST_HEADER include/spdk/keyring_module.h
00:03:55.571    TEST_HEADER include/spdk/likely.h
00:03:55.571    CC app/spdk_dd/spdk_dd.o
00:03:55.571    TEST_HEADER include/spdk/log.h
00:03:55.571    TEST_HEADER include/spdk/lvol.h
00:03:55.571    CC examples/thread/thread/thread_ex.o
00:03:55.571    TEST_HEADER include/spdk/md5.h
00:03:55.571    TEST_HEADER include/spdk/memory.h
00:03:55.571    LINK spdk_nvme_discover
00:03:55.571    TEST_HEADER include/spdk/mmio.h
00:03:55.571    TEST_HEADER include/spdk/nbd.h
00:03:55.571    TEST_HEADER include/spdk/net.h
00:03:55.571    TEST_HEADER include/spdk/notify.h
00:03:55.571    TEST_HEADER include/spdk/nvme.h
00:03:55.571    LINK verify
00:03:55.571    TEST_HEADER include/spdk/nvme_intel.h
00:03:55.571    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:55.830    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:55.830    TEST_HEADER include/spdk/nvme_spec.h
00:03:55.830    TEST_HEADER include/spdk/nvme_zns.h
00:03:55.830    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:55.830    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:55.830    TEST_HEADER include/spdk/nvmf.h
00:03:55.830    TEST_HEADER include/spdk/nvmf_spec.h
00:03:55.830    TEST_HEADER include/spdk/nvmf_transport.h
00:03:55.830    TEST_HEADER include/spdk/opal.h
00:03:55.830    TEST_HEADER include/spdk/opal_spec.h
00:03:55.830    TEST_HEADER include/spdk/pci_ids.h
00:03:55.830    TEST_HEADER include/spdk/pipe.h
00:03:55.830    TEST_HEADER include/spdk/queue.h
00:03:55.830    TEST_HEADER include/spdk/reduce.h
00:03:55.830    TEST_HEADER include/spdk/rpc.h
00:03:55.830    TEST_HEADER include/spdk/scheduler.h
00:03:55.830    TEST_HEADER include/spdk/scsi.h
00:03:55.830    TEST_HEADER include/spdk/scsi_spec.h
00:03:55.830    TEST_HEADER include/spdk/sock.h
00:03:55.830    TEST_HEADER include/spdk/stdinc.h
00:03:55.830    TEST_HEADER include/spdk/string.h
00:03:55.830    TEST_HEADER include/spdk/thread.h
00:03:55.830    TEST_HEADER include/spdk/trace.h
00:03:55.830    TEST_HEADER include/spdk/trace_parser.h
00:03:55.830    TEST_HEADER include/spdk/tree.h
00:03:55.830    TEST_HEADER include/spdk/ublk.h
00:03:55.830    TEST_HEADER include/spdk/util.h
00:03:55.830    TEST_HEADER include/spdk/uuid.h
00:03:55.830    TEST_HEADER include/spdk/version.h
00:03:55.830    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:55.830    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:55.830    TEST_HEADER include/spdk/vhost.h
00:03:55.830    TEST_HEADER include/spdk/vmd.h
00:03:55.830    TEST_HEADER include/spdk/xor.h
00:03:55.830    TEST_HEADER include/spdk/zipf.h
00:03:55.830    CXX test/cpp_headers/accel.o
00:03:55.830    CXX test/cpp_headers/accel_module.o
00:03:55.830    LINK bdev_svc
00:03:55.830    LINK spdk_nvme_perf
00:03:55.830    LINK thread
00:03:55.830    LINK spdk_nvme_identify
00:03:56.089    LINK spdk_dd
00:03:56.089    CXX test/cpp_headers/assert.o
00:03:56.089    CXX test/cpp_headers/barrier.o
00:03:56.089    LINK test_dma
00:03:56.089    LINK spdk_top
00:03:56.089    CXX test/cpp_headers/base64.o
00:03:56.089    CC test/env/mem_callbacks/mem_callbacks.o
00:03:56.089    CC app/fio/nvme/fio_plugin.o
00:03:56.089    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:56.089    CC test/env/vtophys/vtophys.o
00:03:56.347    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:56.347    CC test/env/memory/memory_ut.o
00:03:56.347    CC test/app/histogram_perf/histogram_perf.o
00:03:56.347    CXX test/cpp_headers/bdev.o
00:03:56.347    CC examples/sock/hello_world/hello_sock.o
00:03:56.347    CC app/vhost/vhost.o
00:03:56.347    LINK vtophys
00:03:56.347    LINK env_dpdk_post_init
00:03:56.347    LINK histogram_perf
00:03:56.347    LINK nvme_fuzz
00:03:56.347    CXX test/cpp_headers/bdev_module.o
00:03:56.347    LINK hello_sock
00:03:56.606    LINK spdk_nvme
00:03:56.606    LINK vhost
00:03:56.606    CXX test/cpp_headers/bdev_zone.o
00:03:56.606    CC test/env/pci/pci_ut.o
00:03:56.606    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:56.606    CC examples/vmd/lsvmd/lsvmd.o
00:03:56.606    CXX test/cpp_headers/bit_array.o
00:03:56.606    CC app/fio/bdev/fio_plugin.o
00:03:56.606    LINK mem_callbacks
00:03:56.865    CC examples/idxd/perf/perf.o
00:03:56.865    LINK lsvmd
00:03:56.865    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:56.865    CXX test/cpp_headers/bit_pool.o
00:03:56.865    LINK pci_ut
00:03:56.865    CC examples/vmd/led/led.o
00:03:56.865    CC test/app/jsoncat/jsoncat.o
00:03:56.865    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:57.127    LINK idxd_perf
00:03:57.127    LINK led
00:03:57.127    CXX test/cpp_headers/blob_bdev.o
00:03:57.127    LINK jsoncat
00:03:57.127    LINK spdk_bdev
00:03:57.127    CC test/app/stub/stub.o
00:03:57.127    LINK memory_ut
00:03:57.127    LINK vhost_fuzz
00:03:57.460    LINK stub
00:03:57.460    CXX test/cpp_headers/blobfs_bdev.o
00:03:57.460    CC examples/accel/perf/accel_perf.o
00:03:57.460    CC examples/fsdev/hello_world/hello_fsdev.o
00:03:57.460    LINK iscsi_fuzz
00:03:57.460    CC test/event/event_perf/event_perf.o
00:03:57.460    CC examples/nvme/hello_world/hello_world.o
00:03:57.460    CC test/event/reactor/reactor.o
00:03:57.460    CC examples/blob/hello_world/hello_blob.o
00:03:57.460    CXX test/cpp_headers/blobfs.o
00:03:57.460    CC test/nvme/aer/aer.o
00:03:57.460    CC test/rpc_client/rpc_client_test.o
00:03:57.460    LINK accel_perf
00:03:57.718    LINK event_perf
00:03:57.718    LINK reactor
00:03:57.718    LINK hello_fsdev
00:03:57.718    LINK hello_world
00:03:57.718    LINK hello_blob
00:03:57.718    CXX test/cpp_headers/blob.o
00:03:57.718    LINK rpc_client_test
00:03:57.718    CC test/accel/dif/dif.o
00:03:57.718    CC test/event/reactor_perf/reactor_perf.o
00:03:57.718    LINK aer
00:03:57.718    CC test/nvme/reset/reset.o
00:03:57.977    CC test/event/app_repeat/app_repeat.o
00:03:57.977    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:57.977    CC examples/nvme/reconnect/reconnect.o
00:03:57.977    CXX test/cpp_headers/conf.o
00:03:57.977    LINK reactor_perf
00:03:57.977    CC examples/nvme/arbitration/arbitration.o
00:03:57.977    CC examples/blob/cli/blobcli.o
00:03:57.977    CC examples/nvme/hotplug/hotplug.o
00:03:57.977    LINK app_repeat
00:03:57.977    LINK reset
00:03:57.977    CXX test/cpp_headers/config.o
00:03:57.977    CXX test/cpp_headers/cpuset.o
00:03:58.236    LINK reconnect
00:03:58.236    CXX test/cpp_headers/crc16.o
00:03:58.236    LINK nvme_manage
00:03:58.236    LINK dif
00:03:58.236    LINK hotplug
00:03:58.236    LINK arbitration
00:03:58.236    CC test/event/scheduler/scheduler.o
00:03:58.236    CC test/nvme/sgl/sgl.o
00:03:58.236    LINK blobcli
00:03:58.236    CXX test/cpp_headers/crc32.o
00:03:58.236    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:58.494    CC test/nvme/e2edp/nvme_dp.o
00:03:58.494    CC examples/nvme/abort/abort.o
00:03:58.494    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:58.494    CXX test/cpp_headers/crc64.o
00:03:58.494    LINK sgl
00:03:58.494    LINK scheduler
00:03:58.494    LINK cmb_copy
00:03:58.494    CC test/blobfs/mkfs/mkfs.o
00:03:58.494    LINK nvme_dp
00:03:58.494    CC test/lvol/esnap/esnap.o
00:03:58.752    CC test/nvme/overhead/overhead.o
00:03:58.752    LINK abort
00:03:58.752    CXX test/cpp_headers/dif.o
00:03:58.752    LINK pmr_persistence
00:03:58.752    CXX test/cpp_headers/dma.o
00:03:58.752    CC test/nvme/err_injection/err_injection.o
00:03:58.752    LINK mkfs
00:03:58.752    CC test/nvme/startup/startup.o
00:03:58.752    CXX test/cpp_headers/endian.o
00:03:58.752    LINK overhead
00:03:58.752    CC test/bdev/bdevio/bdevio.o
00:03:59.013    CC test/nvme/reserve/reserve.o
00:03:59.013    LINK err_injection
00:03:59.013    CC test/nvme/simple_copy/simple_copy.o
00:03:59.013    CXX test/cpp_headers/env_dpdk.o
00:03:59.013    LINK startup
00:03:59.013    CC examples/bdev/hello_world/hello_bdev.o
00:03:59.013    CC examples/bdev/bdevperf/bdevperf.o
00:03:59.013    LINK reserve
00:03:59.013    CC test/nvme/connect_stress/connect_stress.o
00:03:59.271    LINK simple_copy
00:03:59.271    CXX test/cpp_headers/env.o
00:03:59.271    CC test/nvme/boot_partition/boot_partition.o
00:03:59.271    LINK bdevio
00:03:59.271    LINK hello_bdev
00:03:59.271    CC test/nvme/compliance/nvme_compliance.o
00:03:59.271    LINK connect_stress
00:03:59.271    CC test/nvme/fused_ordering/fused_ordering.o
00:03:59.271    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:59.271    CXX test/cpp_headers/event.o
00:03:59.271    LINK boot_partition
00:03:59.528    CC test/nvme/fdp/fdp.o
00:03:59.528    CXX test/cpp_headers/fd_group.o
00:03:59.528    CXX test/cpp_headers/fd.o
00:03:59.528    LINK fused_ordering
00:03:59.528    CC test/nvme/cuse/cuse.o
00:03:59.528    LINK bdevperf
00:03:59.528    LINK doorbell_aers
00:03:59.528    CXX test/cpp_headers/file.o
00:03:59.528    LINK nvme_compliance
00:03:59.528    CXX test/cpp_headers/fsdev.o
00:03:59.786    CXX test/cpp_headers/fsdev_module.o
00:03:59.786    CXX test/cpp_headers/ftl.o
00:03:59.786    LINK fdp
00:03:59.786    CXX test/cpp_headers/fuse_dispatcher.o
00:03:59.786    CXX test/cpp_headers/gpt_spec.o
00:03:59.786    CXX test/cpp_headers/hexlify.o
00:03:59.786    CXX test/cpp_headers/histogram_data.o
00:03:59.786    CXX test/cpp_headers/idxd.o
00:03:59.786    CXX test/cpp_headers/idxd_spec.o
00:03:59.786    CXX test/cpp_headers/init.o
00:03:59.786    CXX test/cpp_headers/ioat.o
00:04:00.044    CXX test/cpp_headers/ioat_spec.o
00:04:00.044    CXX test/cpp_headers/iscsi_spec.o
00:04:00.044    CXX test/cpp_headers/json.o
00:04:00.044    CXX test/cpp_headers/jsonrpc.o
00:04:00.044    CXX test/cpp_headers/keyring.o
00:04:00.044    CXX test/cpp_headers/keyring_module.o
00:04:00.044    CXX test/cpp_headers/likely.o
00:04:00.044    CXX test/cpp_headers/log.o
00:04:00.044    CC examples/nvmf/nvmf/nvmf.o
00:04:00.044    CXX test/cpp_headers/lvol.o
00:04:00.044    CXX test/cpp_headers/md5.o
00:04:00.302    CXX test/cpp_headers/memory.o
00:04:00.302    CXX test/cpp_headers/mmio.o
00:04:00.302    CXX test/cpp_headers/nbd.o
00:04:00.302    CXX test/cpp_headers/net.o
00:04:00.302    CXX test/cpp_headers/notify.o
00:04:00.302    CXX test/cpp_headers/nvme.o
00:04:00.302    CXX test/cpp_headers/nvme_intel.o
00:04:00.302    CXX test/cpp_headers/nvme_ocssd.o
00:04:00.302    CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:00.302    LINK nvmf
00:04:00.302    CXX test/cpp_headers/nvme_spec.o
00:04:00.560    CXX test/cpp_headers/nvme_zns.o
00:04:00.560    CXX test/cpp_headers/nvmf_cmd.o
00:04:00.560    LINK cuse
00:04:00.560    CXX test/cpp_headers/nvmf_fc_spec.o
00:04:00.560    CXX test/cpp_headers/nvmf.o
00:04:00.560    CXX test/cpp_headers/nvmf_spec.o
00:04:00.560    CXX test/cpp_headers/nvmf_transport.o
00:04:00.560    CXX test/cpp_headers/opal.o
00:04:00.560    CXX test/cpp_headers/opal_spec.o
00:04:00.560    CXX test/cpp_headers/pci_ids.o
00:04:00.560    CXX test/cpp_headers/pipe.o
00:04:00.560    CXX test/cpp_headers/queue.o
00:04:00.818    CXX test/cpp_headers/reduce.o
00:04:00.818    CXX test/cpp_headers/rpc.o
00:04:00.818    CXX test/cpp_headers/scheduler.o
00:04:00.818    LINK esnap
00:04:00.818    CXX test/cpp_headers/scsi.o
00:04:00.818    CXX test/cpp_headers/scsi_spec.o
00:04:00.818    CXX test/cpp_headers/sock.o
00:04:00.818    CXX test/cpp_headers/string.o
00:04:00.818    CXX test/cpp_headers/stdinc.o
00:04:00.818    CXX test/cpp_headers/thread.o
00:04:00.818    CXX test/cpp_headers/trace.o
00:04:00.818    CXX test/cpp_headers/trace_parser.o
00:04:00.818    CXX test/cpp_headers/tree.o
00:04:01.076    CXX test/cpp_headers/ublk.o
00:04:01.076    CXX test/cpp_headers/util.o
00:04:01.076    CXX test/cpp_headers/uuid.o
00:04:01.076    CXX test/cpp_headers/version.o
00:04:01.076    CXX test/cpp_headers/vfio_user_pci.o
00:04:01.076    CXX test/cpp_headers/vfio_user_spec.o
00:04:01.076    CXX test/cpp_headers/vhost.o
00:04:01.076    CXX test/cpp_headers/vmd.o
00:04:01.076    CXX test/cpp_headers/xor.o
00:04:01.076    CXX test/cpp_headers/zipf.o
00:04:01.332  
00:04:01.332  real	1m25.882s
00:04:01.332  user	7m56.135s
00:04:01.332  sys	1m39.418s
00:04:01.332   07:48:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:01.332   07:48:17 make -- common/autotest_common.sh@10 -- $ set +x
00:04:01.332  ************************************
00:04:01.332  END TEST make
00:04:01.332  ************************************
00:04:01.332   07:48:17  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:01.332   07:48:17  -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:01.332   07:48:17  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:01.332   07:48:17  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:01.332   07:48:17  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:04:01.332   07:48:17  -- pm/common@44 -- $ pid=5446
00:04:01.332   07:48:17  -- pm/common@50 -- $ kill -TERM 5446
00:04:01.332   07:48:17  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:01.332   07:48:17  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:04:01.332   07:48:17  -- pm/common@44 -- $ pid=5448
00:04:01.332   07:48:17  -- pm/common@50 -- $ kill -TERM 5448
00:04:01.332   07:48:17  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:04:01.332   07:48:17  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:01.333    07:48:17  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:01.333     07:48:17  -- common/autotest_common.sh@1693 -- # lcov --version
00:04:01.333     07:48:17  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:01.590    07:48:17  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:01.590    07:48:17  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:01.590    07:48:17  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:01.590    07:48:17  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:01.590    07:48:17  -- scripts/common.sh@336 -- # IFS=.-:
00:04:01.590    07:48:17  -- scripts/common.sh@336 -- # read -ra ver1
00:04:01.590    07:48:17  -- scripts/common.sh@337 -- # IFS=.-:
00:04:01.590    07:48:17  -- scripts/common.sh@337 -- # read -ra ver2
00:04:01.590    07:48:17  -- scripts/common.sh@338 -- # local 'op=<'
00:04:01.590    07:48:17  -- scripts/common.sh@340 -- # ver1_l=2
00:04:01.590    07:48:17  -- scripts/common.sh@341 -- # ver2_l=1
00:04:01.590    07:48:17  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:01.590    07:48:17  -- scripts/common.sh@344 -- # case "$op" in
00:04:01.590    07:48:17  -- scripts/common.sh@345 -- # : 1
00:04:01.590    07:48:17  -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:01.590    07:48:17  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:01.590     07:48:17  -- scripts/common.sh@365 -- # decimal 1
00:04:01.590     07:48:17  -- scripts/common.sh@353 -- # local d=1
00:04:01.590     07:48:17  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:01.590     07:48:17  -- scripts/common.sh@355 -- # echo 1
00:04:01.590    07:48:17  -- scripts/common.sh@365 -- # ver1[v]=1
00:04:01.590     07:48:17  -- scripts/common.sh@366 -- # decimal 2
00:04:01.590     07:48:17  -- scripts/common.sh@353 -- # local d=2
00:04:01.591     07:48:17  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:01.591     07:48:17  -- scripts/common.sh@355 -- # echo 2
00:04:01.591    07:48:17  -- scripts/common.sh@366 -- # ver2[v]=2
00:04:01.591    07:48:17  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:01.591    07:48:17  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:01.591    07:48:17  -- scripts/common.sh@368 -- # return 0
00:04:01.591    07:48:17  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:01.591    07:48:17  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:01.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.591  		--rc genhtml_branch_coverage=1
00:04:01.591  		--rc genhtml_function_coverage=1
00:04:01.591  		--rc genhtml_legend=1
00:04:01.591  		--rc geninfo_all_blocks=1
00:04:01.591  		--rc geninfo_unexecuted_blocks=1
00:04:01.591  		
00:04:01.591  		'
00:04:01.591    07:48:17  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:01.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.591  		--rc genhtml_branch_coverage=1
00:04:01.591  		--rc genhtml_function_coverage=1
00:04:01.591  		--rc genhtml_legend=1
00:04:01.591  		--rc geninfo_all_blocks=1
00:04:01.591  		--rc geninfo_unexecuted_blocks=1
00:04:01.591  		
00:04:01.591  		'
00:04:01.591    07:48:17  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:01.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.591  		--rc genhtml_branch_coverage=1
00:04:01.591  		--rc genhtml_function_coverage=1
00:04:01.591  		--rc genhtml_legend=1
00:04:01.591  		--rc geninfo_all_blocks=1
00:04:01.591  		--rc geninfo_unexecuted_blocks=1
00:04:01.591  		
00:04:01.591  		'
00:04:01.591    07:48:17  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:01.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.591  		--rc genhtml_branch_coverage=1
00:04:01.591  		--rc genhtml_function_coverage=1
00:04:01.591  		--rc genhtml_legend=1
00:04:01.591  		--rc geninfo_all_blocks=1
00:04:01.591  		--rc geninfo_unexecuted_blocks=1
00:04:01.591  		
00:04:01.591  		'
00:04:01.591   07:48:17  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:01.591     07:48:17  -- nvmf/common.sh@7 -- # uname -s
00:04:01.591    07:48:17  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:01.591    07:48:17  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:01.591    07:48:17  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:01.591    07:48:17  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:01.591    07:48:17  -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:01.591    07:48:17  -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:04:01.591    07:48:17  -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:01.591     07:48:17  -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:04:01.591    07:48:17  -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5ba723b5-059c-4de2-baf9-14c571c75cf0
00:04:01.591    07:48:17  -- nvmf/common.sh@16 -- # NVME_HOSTID=5ba723b5-059c-4de2-baf9-14c571c75cf0
00:04:01.591    07:48:17  -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:01.591    07:48:17  -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:04:01.591    07:48:17  -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback
00:04:01.591    07:48:17  -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:01.591    07:48:17  -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:01.591     07:48:17  -- scripts/common.sh@15 -- # shopt -s extglob
00:04:01.591     07:48:17  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:01.591     07:48:17  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:01.591     07:48:17  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:01.591      07:48:17  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.591      07:48:17  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.591      07:48:17  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.591      07:48:17  -- paths/export.sh@5 -- # export PATH
00:04:01.591      07:48:17  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.591    07:48:17  -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:04:01.591     07:48:17  -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:04:01.591     07:48:17  -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:04:01.591     07:48:17  -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:04:01.591    07:48:17  -- nvmf/common.sh@50 -- # : 0
00:04:01.591    07:48:17  -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:04:01.591    07:48:17  -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:04:01.591    07:48:17  -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:04:01.591    07:48:17  -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:01.591    07:48:17  -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:01.591    07:48:17  -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:04:01.591  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:04:01.591    07:48:17  -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:04:01.591    07:48:17  -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:04:01.591    07:48:17  -- nvmf/common.sh@54 -- # have_pci_nics=0
00:04:01.591   07:48:17  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:01.591    07:48:17  -- spdk/autotest.sh@32 -- # uname -s
00:04:01.591   07:48:17  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:01.591   07:48:17  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:01.591   07:48:17  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:01.591   07:48:17  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:04:01.591   07:48:17  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:01.591   07:48:17  -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:01.591    07:48:17  -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:01.591   07:48:17  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:01.591   07:48:17  -- spdk/autotest.sh@48 -- # udevadm_pid=54303
00:04:01.591   07:48:17  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:01.591   07:48:17  -- pm/common@17 -- # local monitor
00:04:01.591   07:48:17  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:01.591    07:48:17  -- pm/common@21 -- # date +%s
00:04:01.591   07:48:17  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:01.591   07:48:17  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:01.591   07:48:17  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732088897
00:04:01.591    07:48:17  -- pm/common@21 -- # date +%s
00:04:01.591   07:48:17  -- pm/common@25 -- # sleep 1
00:04:01.591   07:48:17  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732088897
00:04:01.591  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732088897_collect-cpu-load.pm.log
00:04:01.591  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732088897_collect-vmstat.pm.log
00:04:02.526   07:48:18  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:02.526   07:48:18  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:02.526   07:48:18  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:02.526   07:48:18  -- common/autotest_common.sh@10 -- # set +x
00:04:02.526   07:48:18  -- spdk/autotest.sh@59 -- # create_test_list
00:04:02.526   07:48:18  -- common/autotest_common.sh@752 -- # xtrace_disable
00:04:02.526   07:48:18  -- common/autotest_common.sh@10 -- # set +x
00:04:02.785     07:48:18  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:04:02.785    07:48:18  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:04:02.785   07:48:18  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:04:02.785   07:48:18  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:04:02.785   07:48:18  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:04:02.785   07:48:18  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:02.785    07:48:18  -- common/autotest_common.sh@1457 -- # uname
00:04:02.785   07:48:18  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:04:02.785   07:48:18  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:02.785    07:48:18  -- common/autotest_common.sh@1477 -- # uname
00:04:02.785   07:48:18  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:04:02.785   07:48:18  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:04:02.785   07:48:18  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:02.785  lcov: LCOV version 1.15
00:04:02.785   07:48:18  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:04:17.663  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:17.663  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:04:32.537   07:48:47  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:04:32.537   07:48:47  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:32.537   07:48:47  -- common/autotest_common.sh@10 -- # set +x
00:04:32.537   07:48:47  -- spdk/autotest.sh@78 -- # rm -f
00:04:32.537   07:48:47  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:32.537  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.537  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:04:32.537  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:04:32.537   07:48:48  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:04:32.537   07:48:48  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:04:32.537   07:48:48  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:04:32.537   07:48:48  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:04:32.537   07:48:48  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:32.537   07:48:48  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:04:32.537   07:48:48  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:04:32.537   07:48:48  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:32.537   07:48:48  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:32.537   07:48:48  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:32.537   07:48:48  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:04:32.537   07:48:48  -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:04:32.537   07:48:48  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:32.537   07:48:48  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:32.537   07:48:48  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:32.537   07:48:48  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:32.537   07:48:48  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:32.537   07:48:48  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:32.537   07:48:48  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:32.537   07:48:48  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:32.796  No valid GPT data, bailing
00:04:32.796    07:48:48  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:32.796   07:48:48  -- scripts/common.sh@394 -- # pt=
00:04:32.796   07:48:48  -- scripts/common.sh@395 -- # return 1
00:04:32.796   07:48:48  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:32.796  1+0 records in
00:04:32.796  1+0 records out
00:04:32.796  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00546797 s, 192 MB/s
00:04:32.796   07:48:48  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:32.796   07:48:48  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:32.796   07:48:48  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:04:32.796   07:48:48  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:04:32.796   07:48:48  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:04:32.796  No valid GPT data, bailing
00:04:32.796    07:48:48  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:32.796   07:48:48  -- scripts/common.sh@394 -- # pt=
00:04:32.796   07:48:48  -- scripts/common.sh@395 -- # return 1
00:04:32.796   07:48:48  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:04:32.796  1+0 records in
00:04:32.796  1+0 records out
00:04:32.796  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048509 s, 216 MB/s
00:04:32.796   07:48:48  -- spdk/autotest.sh@105 -- # sync
00:04:33.055   07:48:48  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:33.055   07:48:48  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:33.055    07:48:48  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:34.962    07:48:50  -- spdk/autotest.sh@111 -- # uname -s
00:04:34.962   07:48:50  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:34.962   07:48:50  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:34.962   07:48:50  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:35.529  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:35.529  Hugepages
00:04:35.529  node     hugesize     free /  total
00:04:35.529  node0   1048576kB        0 /      0
00:04:35.529  node0      2048kB        0 /      0
00:04:35.529  
00:04:35.529  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:35.529  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:04:35.788  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:04:35.788  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:04:35.788    07:48:51  -- spdk/autotest.sh@117 -- # uname -s
00:04:35.788   07:48:51  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:35.788   07:48:51  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:35.788   07:48:51  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:36.356  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:36.356  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:36.356  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:36.615   07:48:52  -- common/autotest_common.sh@1517 -- # sleep 1
00:04:37.550   07:48:53  -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:37.550   07:48:53  -- common/autotest_common.sh@1518 -- # local bdfs
00:04:37.550   07:48:53  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:37.550    07:48:53  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:37.550    07:48:53  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:37.550    07:48:53  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:37.550    07:48:53  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:37.550     07:48:53  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:37.550     07:48:53  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:37.550    07:48:53  -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:37.550    07:48:53  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:37.550   07:48:53  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:37.808  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:38.066  Waiting for block devices as requested
00:04:38.066  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:38.066  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:38.066   07:48:54  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:38.066    07:48:54  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:38.066     07:48:54  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:38.066     07:48:54  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:04:38.066    07:48:54  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:38.066    07:48:54  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:04:38.326     07:48:54  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:38.326    07:48:54  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:04:38.326   07:48:54  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:04:38.326   07:48:54  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:04:38.326    07:48:54  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:04:38.326    07:48:54  -- common/autotest_common.sh@1531 -- # grep oacs
00:04:38.326    07:48:54  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:38.326   07:48:54  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:04:38.326   07:48:54  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:38.326   07:48:54  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:38.326    07:48:54  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:38.326    07:48:54  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:04:38.326    07:48:54  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:38.326   07:48:54  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:38.326   07:48:54  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:04:38.326   07:48:54  -- common/autotest_common.sh@1543 -- # continue
00:04:38.326   07:48:54  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:38.326    07:48:54  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:04:38.326     07:48:54  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:38.326     07:48:54  -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:04:38.326    07:48:54  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:04:38.326    07:48:54  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:04:38.326     07:48:54  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:04:38.326    07:48:54  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:38.326   07:48:54  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:38.326   07:48:54  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:38.326    07:48:54  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:38.326    07:48:54  -- common/autotest_common.sh@1531 -- # grep oacs
00:04:38.326    07:48:54  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:38.326   07:48:54  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:04:38.326   07:48:54  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:38.326   07:48:54  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:38.326    07:48:54  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:38.326    07:48:54  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:38.326    07:48:54  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:38.326   07:48:54  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:38.326   07:48:54  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:04:38.326   07:48:54  -- common/autotest_common.sh@1543 -- # continue
00:04:38.326   07:48:54  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:38.326   07:48:54  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:38.326   07:48:54  -- common/autotest_common.sh@10 -- # set +x
00:04:38.326   07:48:54  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:38.326   07:48:54  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:38.326   07:48:54  -- common/autotest_common.sh@10 -- # set +x
00:04:38.326   07:48:54  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:38.893  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:38.893  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:38.893  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:39.152   07:48:54  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:39.152   07:48:54  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:39.152   07:48:54  -- common/autotest_common.sh@10 -- # set +x
00:04:39.152   07:48:54  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:39.152   07:48:54  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:39.152    07:48:54  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:39.152    07:48:54  -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:39.152    07:48:54  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:39.152    07:48:54  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:39.152    07:48:54  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:39.152     07:48:54  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:39.152     07:48:54  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:39.152     07:48:54  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:39.152     07:48:54  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:39.152      07:48:54  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:39.152      07:48:54  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:39.152     07:48:55  -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:39.152     07:48:55  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:39.152    07:48:55  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:39.152     07:48:55  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:04:39.152    07:48:55  -- common/autotest_common.sh@1566 -- # device=0x0010
00:04:39.152    07:48:55  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:39.152    07:48:55  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:39.152     07:48:55  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:04:39.152    07:48:55  -- common/autotest_common.sh@1566 -- # device=0x0010
00:04:39.152    07:48:55  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:39.152    07:48:55  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:04:39.152    07:48:55  -- common/autotest_common.sh@1572 -- # return 0
00:04:39.152   07:48:55  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:04:39.152   07:48:55  -- common/autotest_common.sh@1580 -- # return 0
00:04:39.152   07:48:55  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:39.152   07:48:55  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:39.152   07:48:55  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:39.152   07:48:55  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:39.152   07:48:55  -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:39.152   07:48:55  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:39.152   07:48:55  -- common/autotest_common.sh@10 -- # set +x
00:04:39.152   07:48:55  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:39.152   07:48:55  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:39.152   07:48:55  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:39.152   07:48:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:39.152   07:48:55  -- common/autotest_common.sh@10 -- # set +x
00:04:39.152  ************************************
00:04:39.152  START TEST env
00:04:39.152  ************************************
00:04:39.152   07:48:55 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:39.152  * Looking for test storage...
00:04:39.152  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:04:39.152    07:48:55 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:39.152     07:48:55 env -- common/autotest_common.sh@1693 -- # lcov --version
00:04:39.152     07:48:55 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:39.411    07:48:55 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:39.411    07:48:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:39.411    07:48:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:39.411    07:48:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:39.411    07:48:55 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:39.411    07:48:55 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:39.411    07:48:55 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:39.411    07:48:55 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:39.411    07:48:55 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:39.411    07:48:55 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:39.411    07:48:55 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:39.411    07:48:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:39.411    07:48:55 env -- scripts/common.sh@344 -- # case "$op" in
00:04:39.411    07:48:55 env -- scripts/common.sh@345 -- # : 1
00:04:39.411    07:48:55 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:39.411    07:48:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:39.411     07:48:55 env -- scripts/common.sh@365 -- # decimal 1
00:04:39.411     07:48:55 env -- scripts/common.sh@353 -- # local d=1
00:04:39.411     07:48:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:39.411     07:48:55 env -- scripts/common.sh@355 -- # echo 1
00:04:39.411    07:48:55 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:39.411     07:48:55 env -- scripts/common.sh@366 -- # decimal 2
00:04:39.411     07:48:55 env -- scripts/common.sh@353 -- # local d=2
00:04:39.411     07:48:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:39.411     07:48:55 env -- scripts/common.sh@355 -- # echo 2
00:04:39.411    07:48:55 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:39.411    07:48:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:39.411    07:48:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:39.411    07:48:55 env -- scripts/common.sh@368 -- # return 0
00:04:39.411    07:48:55 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:39.411    07:48:55 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:39.411  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.411  		--rc genhtml_branch_coverage=1
00:04:39.411  		--rc genhtml_function_coverage=1
00:04:39.411  		--rc genhtml_legend=1
00:04:39.411  		--rc geninfo_all_blocks=1
00:04:39.411  		--rc geninfo_unexecuted_blocks=1
00:04:39.411  		
00:04:39.411  		'
00:04:39.411    07:48:55 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:39.411  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.411  		--rc genhtml_branch_coverage=1
00:04:39.411  		--rc genhtml_function_coverage=1
00:04:39.411  		--rc genhtml_legend=1
00:04:39.411  		--rc geninfo_all_blocks=1
00:04:39.411  		--rc geninfo_unexecuted_blocks=1
00:04:39.411  		
00:04:39.411  		'
00:04:39.411    07:48:55 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:39.411  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.411  		--rc genhtml_branch_coverage=1
00:04:39.411  		--rc genhtml_function_coverage=1
00:04:39.411  		--rc genhtml_legend=1
00:04:39.411  		--rc geninfo_all_blocks=1
00:04:39.411  		--rc geninfo_unexecuted_blocks=1
00:04:39.411  		
00:04:39.411  		'
00:04:39.411    07:48:55 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:39.411  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.411  		--rc genhtml_branch_coverage=1
00:04:39.411  		--rc genhtml_function_coverage=1
00:04:39.411  		--rc genhtml_legend=1
00:04:39.411  		--rc geninfo_all_blocks=1
00:04:39.411  		--rc geninfo_unexecuted_blocks=1
00:04:39.411  		
00:04:39.411  		'
00:04:39.411   07:48:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:39.411   07:48:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:39.411   07:48:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:39.411   07:48:55 env -- common/autotest_common.sh@10 -- # set +x
00:04:39.411  ************************************
00:04:39.411  START TEST env_memory
00:04:39.411  ************************************
00:04:39.411   07:48:55 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:39.411  
00:04:39.411  
00:04:39.411       CUnit - A unit testing framework for C - Version 2.1-3
00:04:39.411       http://cunit.sourceforge.net/
00:04:39.411  
00:04:39.411  
00:04:39.411  Suite: memory
00:04:39.411    Test: alloc and free memory map ...[2024-11-20 07:48:55.303727] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:39.411  passed
00:04:39.411    Test: mem map translation ...[2024-11-20 07:48:55.319549] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:39.411  [2024-11-20 07:48:55.319894] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:39.411  [2024-11-20 07:48:55.320256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:39.411  [2024-11-20 07:48:55.320556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:39.411  passed
00:04:39.411    Test: mem map registration ...[2024-11-20 07:48:55.341773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:39.411  [2024-11-20 07:48:55.342084] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:39.411  passed
00:04:39.411    Test: mem map adjacent registrations ...passed
00:04:39.411  
00:04:39.412  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:39.412                suites      1      1    n/a      0        0
00:04:39.412                 tests      4      4      4      0        0
00:04:39.412               asserts    152    152    152      0      n/a
00:04:39.412  
00:04:39.412  Elapsed time =    0.076 seconds
00:04:39.412  
00:04:39.412  real	0m0.090s
00:04:39.412  user	0m0.070s
00:04:39.412  sys	0m0.015s
00:04:39.412   07:48:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:39.412   07:48:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:39.412  ************************************
00:04:39.412  END TEST env_memory
00:04:39.412  ************************************
00:04:39.412   07:48:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:39.412   07:48:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:39.412   07:48:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:39.412   07:48:55 env -- common/autotest_common.sh@10 -- # set +x
00:04:39.412  ************************************
00:04:39.412  START TEST env_vtophys
00:04:39.412  ************************************
00:04:39.412   07:48:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:39.412  EAL: lib.eal log level changed from notice to debug
00:04:39.412  EAL: Detected lcore 0 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 1 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 2 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 3 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 4 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 5 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 6 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 7 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 8 as core 0 on socket 0
00:04:39.412  EAL: Detected lcore 9 as core 0 on socket 0
00:04:39.671  EAL: Maximum logical cores by configuration: 128
00:04:39.671  EAL: Detected CPU lcores: 10
00:04:39.671  EAL: Detected NUMA nodes: 1
00:04:39.671  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:39.671  EAL: Detected shared linkage of DPDK
00:04:39.671  EAL: No shared files mode enabled, IPC will be disabled
00:04:39.671  EAL: Selected IOVA mode 'PA'
00:04:39.671  EAL: Probing VFIO support...
00:04:39.671  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:39.671  EAL: VFIO modules not loaded, skipping VFIO support...
00:04:39.671  EAL: Ask a virtual area of 0x2e000 bytes
00:04:39.671  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:39.671  EAL: Setting up physically contiguous memory...
00:04:39.671  EAL: Setting maximum number of open files to 524288
00:04:39.671  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:39.671  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:39.671  EAL: Ask a virtual area of 0x61000 bytes
00:04:39.671  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:39.671  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:39.671  EAL: Ask a virtual area of 0x400000000 bytes
00:04:39.671  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:39.671  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:39.671  EAL: Ask a virtual area of 0x61000 bytes
00:04:39.671  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:39.671  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:39.671  EAL: Ask a virtual area of 0x400000000 bytes
00:04:39.671  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:39.671  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:39.671  EAL: Ask a virtual area of 0x61000 bytes
00:04:39.671  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:39.671  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:39.671  EAL: Ask a virtual area of 0x400000000 bytes
00:04:39.671  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:39.671  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:39.671  EAL: Ask a virtual area of 0x61000 bytes
00:04:39.671  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:39.671  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:39.671  EAL: Ask a virtual area of 0x400000000 bytes
00:04:39.671  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:39.671  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:39.671  EAL: Hugepages will be freed exactly as allocated.
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: TSC frequency is ~2200000 KHz
00:04:39.671  EAL: Main lcore 0 is ready (tid=7f07a8d3c9c0;cpuset=[0])
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 0
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 2MB
00:04:39.671  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:39.671  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:04:39.671  EAL: Mem event callback 'spdk:(nil)' registered
00:04:39.671  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:04:39.671  
00:04:39.671  
00:04:39.671       CUnit - A unit testing framework for C - Version 2.1-3
00:04:39.671       http://cunit.sourceforge.net/
00:04:39.671  
00:04:39.671  
00:04:39.671  Suite: components_suite
00:04:39.671    Test: vtophys_malloc_test ...passed
00:04:39.671    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 4MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was shrunk by 4MB
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 6MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was shrunk by 6MB
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 10MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was shrunk by 10MB
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 18MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was shrunk by 18MB
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 34MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was shrunk by 34MB
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 66MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was shrunk by 66MB
00:04:39.671  EAL: Trying to obtain current memory policy.
00:04:39.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.671  EAL: Restoring previous memory policy: 4
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.671  EAL: request: mp_malloc_sync
00:04:39.671  EAL: No shared files mode enabled, IPC is disabled
00:04:39.671  EAL: Heap on socket 0 was expanded by 130MB
00:04:39.671  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.930  EAL: request: mp_malloc_sync
00:04:39.930  EAL: No shared files mode enabled, IPC is disabled
00:04:39.930  EAL: Heap on socket 0 was shrunk by 130MB
00:04:39.930  EAL: Trying to obtain current memory policy.
00:04:39.930  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:39.930  EAL: Restoring previous memory policy: 4
00:04:39.930  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.930  EAL: request: mp_malloc_sync
00:04:39.930  EAL: No shared files mode enabled, IPC is disabled
00:04:39.930  EAL: Heap on socket 0 was expanded by 258MB
00:04:39.930  EAL: Calling mem event callback 'spdk:(nil)'
00:04:39.930  EAL: request: mp_malloc_sync
00:04:39.930  EAL: No shared files mode enabled, IPC is disabled
00:04:39.930  EAL: Heap on socket 0 was shrunk by 258MB
00:04:39.930  EAL: Trying to obtain current memory policy.
00:04:39.930  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:40.188  EAL: Restoring previous memory policy: 4
00:04:40.188  EAL: Calling mem event callback 'spdk:(nil)'
00:04:40.188  EAL: request: mp_malloc_sync
00:04:40.188  EAL: No shared files mode enabled, IPC is disabled
00:04:40.188  EAL: Heap on socket 0 was expanded by 514MB
00:04:40.188  EAL: Calling mem event callback 'spdk:(nil)'
00:04:40.446  EAL: request: mp_malloc_sync
00:04:40.446  EAL: No shared files mode enabled, IPC is disabled
00:04:40.446  EAL: Heap on socket 0 was shrunk by 514MB
00:04:40.446  EAL: Trying to obtain current memory policy.
00:04:40.446  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:40.704  EAL: Restoring previous memory policy: 4
00:04:40.704  EAL: Calling mem event callback 'spdk:(nil)'
00:04:40.704  EAL: request: mp_malloc_sync
00:04:40.704  EAL: No shared files mode enabled, IPC is disabled
00:04:40.704  EAL: Heap on socket 0 was expanded by 1026MB
00:04:40.704  EAL: Calling mem event callback 'spdk:(nil)'
00:04:40.962  EAL: request: mp_malloc_sync
00:04:40.962  passed
00:04:40.962  
00:04:40.962  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:40.962                suites      1      1    n/a      0        0
00:04:40.962                 tests      2      2      2      0        0
00:04:40.962               asserts   5449   5449   5449      0      n/a
00:04:40.962  
00:04:40.962  Elapsed time =    1.287 seconds
00:04:40.962  EAL: No shared files mode enabled, IPC is disabled
00:04:40.962  EAL: Heap on socket 0 was shrunk by 1026MB
00:04:40.962  EAL: Calling mem event callback 'spdk:(nil)'
00:04:40.962  EAL: request: mp_malloc_sync
00:04:40.962  EAL: No shared files mode enabled, IPC is disabled
00:04:40.962  EAL: Heap on socket 0 was shrunk by 2MB
00:04:40.962  EAL: No shared files mode enabled, IPC is disabled
00:04:40.962  EAL: No shared files mode enabled, IPC is disabled
00:04:40.962  EAL: No shared files mode enabled, IPC is disabled
00:04:40.962  
00:04:40.962  real	0m1.485s
00:04:40.962  user	0m0.809s
00:04:40.962  sys	0m0.538s
00:04:40.962   07:48:56 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:40.962   07:48:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:40.962  ************************************
00:04:40.962  END TEST env_vtophys
00:04:40.963  ************************************
00:04:40.963   07:48:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:40.963   07:48:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:40.963   07:48:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:40.963   07:48:56 env -- common/autotest_common.sh@10 -- # set +x
00:04:40.963  ************************************
00:04:40.963  START TEST env_pci
00:04:40.963  ************************************
00:04:40.963   07:48:56 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:40.963  
00:04:40.963  
00:04:40.963       CUnit - A unit testing framework for C - Version 2.1-3
00:04:40.963       http://cunit.sourceforge.net/
00:04:40.963  
00:04:40.963  
00:04:40.963  Suite: pci
00:04:40.963    Test: pci_hook ...[2024-11-20 07:48:56.980419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56466 has claimed it
00:04:40.963  passed
00:04:40.963  
00:04:40.963  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:40.963                suites      1      1    n/a      0        0
00:04:40.963                 tests      1      1      1      0        0
00:04:40.963               asserts     25     25     25      0      n/a
00:04:40.963  
00:04:40.963  Elapsed time =    0.002 seconds
00:04:40.963  EAL: Cannot find device (10000:00:01.0)
00:04:40.963  EAL: Failed to attach device on primary process
00:04:40.963  
00:04:40.963  real	0m0.018s
00:04:40.963  user	0m0.005s
00:04:40.963  sys	0m0.013s
00:04:40.963  ************************************
00:04:40.963  END TEST env_pci
00:04:40.963  ************************************
00:04:40.963   07:48:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:40.963   07:48:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:41.221   07:48:57 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:41.221    07:48:57 env -- env/env.sh@15 -- # uname
00:04:41.221   07:48:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:41.221   07:48:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:41.221   07:48:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:41.221   07:48:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:41.221   07:48:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:41.221   07:48:57 env -- common/autotest_common.sh@10 -- # set +x
00:04:41.221  ************************************
00:04:41.221  START TEST env_dpdk_post_init
00:04:41.221  ************************************
00:04:41.221   07:48:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:41.221  EAL: Detected CPU lcores: 10
00:04:41.221  EAL: Detected NUMA nodes: 1
00:04:41.221  EAL: Detected shared linkage of DPDK
00:04:41.221  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:41.221  EAL: Selected IOVA mode 'PA'
00:04:41.221  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:41.221  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:41.221  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:41.221  Starting DPDK initialization...
00:04:41.221  Starting SPDK post initialization...
00:04:41.221  SPDK NVMe probe
00:04:41.221  Attaching to 0000:00:10.0
00:04:41.221  Attaching to 0000:00:11.0
00:04:41.221  Attached to 0000:00:10.0
00:04:41.221  Attached to 0000:00:11.0
00:04:41.221  Cleaning up...
00:04:41.221  
00:04:41.221  real	0m0.168s
00:04:41.221  user	0m0.032s
00:04:41.221  sys	0m0.039s
00:04:41.221   07:48:57 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:41.221   07:48:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:41.221  ************************************
00:04:41.221  END TEST env_dpdk_post_init
00:04:41.221  ************************************
00:04:41.221    07:48:57 env -- env/env.sh@26 -- # uname
00:04:41.480   07:48:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:41.480   07:48:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:41.480   07:48:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:41.480   07:48:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:41.480   07:48:57 env -- common/autotest_common.sh@10 -- # set +x
00:04:41.480  ************************************
00:04:41.480  START TEST env_mem_callbacks
00:04:41.480  ************************************
00:04:41.480   07:48:57 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:41.480  EAL: Detected CPU lcores: 10
00:04:41.480  EAL: Detected NUMA nodes: 1
00:04:41.480  EAL: Detected shared linkage of DPDK
00:04:41.480  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:41.480  EAL: Selected IOVA mode 'PA'
00:04:41.480  
00:04:41.480  
00:04:41.480       CUnit - A unit testing framework for C - Version 2.1-3
00:04:41.480       http://cunit.sourceforge.net/
00:04:41.480  
00:04:41.480  
00:04:41.480  Suite: memory
00:04:41.480    Test: test ...
00:04:41.480  register 0x200000200000 2097152
00:04:41.480  malloc 3145728
00:04:41.480  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:41.480  register 0x200000400000 4194304
00:04:41.480  buf 0x200000500000 len 3145728 PASSED
00:04:41.480  malloc 64
00:04:41.480  buf 0x2000004fff40 len 64 PASSED
00:04:41.480  malloc 4194304
00:04:41.480  register 0x200000800000 6291456
00:04:41.480  buf 0x200000a00000 len 4194304 PASSED
00:04:41.480  free 0x200000500000 3145728
00:04:41.480  free 0x2000004fff40 64
00:04:41.480  unregister 0x200000400000 4194304 PASSED
00:04:41.480  free 0x200000a00000 4194304
00:04:41.480  unregister 0x200000800000 6291456 PASSED
00:04:41.480  malloc 8388608
00:04:41.480  register 0x200000400000 10485760
00:04:41.480  buf 0x200000600000 len 8388608 PASSED
00:04:41.480  free 0x200000600000 8388608
00:04:41.480  unregister 0x200000400000 10485760 PASSED
00:04:41.480  passed
00:04:41.480  
00:04:41.480  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:41.480                suites      1      1    n/a      0        0
00:04:41.480                 tests      1      1      1      0        0
00:04:41.480               asserts     15     15     15      0      n/a
00:04:41.480  
00:04:41.480  Elapsed time =    0.008 seconds
00:04:41.480  
00:04:41.480  real	0m0.141s
00:04:41.480  user	0m0.018s
00:04:41.480  sys	0m0.021s
00:04:41.480   07:48:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:41.480   07:48:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:41.480  ************************************
00:04:41.480  END TEST env_mem_callbacks
00:04:41.480  ************************************
00:04:41.480  ************************************
00:04:41.480  END TEST env
00:04:41.480  ************************************
00:04:41.480  
00:04:41.480  real	0m2.384s
00:04:41.480  user	0m1.147s
00:04:41.480  sys	0m0.874s
00:04:41.480   07:48:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:41.480   07:48:57 env -- common/autotest_common.sh@10 -- # set +x
00:04:41.480   07:48:57  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:41.480   07:48:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:41.480   07:48:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:41.480   07:48:57  -- common/autotest_common.sh@10 -- # set +x
00:04:41.480  ************************************
00:04:41.480  START TEST rpc
00:04:41.480  ************************************
00:04:41.480   07:48:57 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:41.738  * Looking for test storage...
00:04:41.738  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:41.738    07:48:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:41.738     07:48:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:41.738     07:48:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:41.738    07:48:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:41.738    07:48:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:41.738    07:48:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:41.738    07:48:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:41.738    07:48:57 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:41.738    07:48:57 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:41.738    07:48:57 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:41.738    07:48:57 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:41.738    07:48:57 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:41.738    07:48:57 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:41.738    07:48:57 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:41.738    07:48:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:41.738    07:48:57 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:41.738    07:48:57 rpc -- scripts/common.sh@345 -- # : 1
00:04:41.738    07:48:57 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:41.738    07:48:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:41.738     07:48:57 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:41.738     07:48:57 rpc -- scripts/common.sh@353 -- # local d=1
00:04:41.738     07:48:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:41.738     07:48:57 rpc -- scripts/common.sh@355 -- # echo 1
00:04:41.738    07:48:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:41.738     07:48:57 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:41.739     07:48:57 rpc -- scripts/common.sh@353 -- # local d=2
00:04:41.739     07:48:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:41.739     07:48:57 rpc -- scripts/common.sh@355 -- # echo 2
00:04:41.739    07:48:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:41.739    07:48:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:41.739  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:41.739    07:48:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:41.739    07:48:57 rpc -- scripts/common.sh@368 -- # return 0
00:04:41.739    07:48:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:41.739    07:48:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:41.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:41.739  		--rc genhtml_branch_coverage=1
00:04:41.739  		--rc genhtml_function_coverage=1
00:04:41.739  		--rc genhtml_legend=1
00:04:41.739  		--rc geninfo_all_blocks=1
00:04:41.739  		--rc geninfo_unexecuted_blocks=1
00:04:41.739  		
00:04:41.739  		'
00:04:41.739    07:48:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:41.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:41.739  		--rc genhtml_branch_coverage=1
00:04:41.739  		--rc genhtml_function_coverage=1
00:04:41.739  		--rc genhtml_legend=1
00:04:41.739  		--rc geninfo_all_blocks=1
00:04:41.739  		--rc geninfo_unexecuted_blocks=1
00:04:41.739  		
00:04:41.739  		'
00:04:41.739    07:48:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:41.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:41.739  		--rc genhtml_branch_coverage=1
00:04:41.739  		--rc genhtml_function_coverage=1
00:04:41.739  		--rc genhtml_legend=1
00:04:41.739  		--rc geninfo_all_blocks=1
00:04:41.739  		--rc geninfo_unexecuted_blocks=1
00:04:41.739  		
00:04:41.739  		'
00:04:41.739    07:48:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:41.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:41.739  		--rc genhtml_branch_coverage=1
00:04:41.739  		--rc genhtml_function_coverage=1
00:04:41.739  		--rc genhtml_legend=1
00:04:41.739  		--rc geninfo_all_blocks=1
00:04:41.739  		--rc geninfo_unexecuted_blocks=1
00:04:41.739  		
00:04:41.739  		'
00:04:41.739   07:48:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56584
00:04:41.739   07:48:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:41.739   07:48:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56584
00:04:41.739   07:48:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:41.739   07:48:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 56584 ']'
00:04:41.739   07:48:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:41.739   07:48:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:41.739   07:48:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:41.739   07:48:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:41.739   07:48:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:41.739  [2024-11-20 07:48:57.732433] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:04:41.739  [2024-11-20 07:48:57.732745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56584 ]
00:04:41.997  [2024-11-20 07:48:57.867282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:41.997  [2024-11-20 07:48:57.926681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:41.997  [2024-11-20 07:48:57.926992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56584' to capture a snapshot of events at runtime.
00:04:41.997  [2024-11-20 07:48:57.927149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:41.997  [2024-11-20 07:48:57.927201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:41.997  [2024-11-20 07:48:57.927367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56584 for offline analysis/debug.
00:04:41.997  [2024-11-20 07:48:57.927722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.256   07:48:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:42.256   07:48:58 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:42.256   07:48:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:42.256   07:48:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:42.256   07:48:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:42.256   07:48:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:42.256   07:48:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.256   07:48:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.256   07:48:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:42.257  ************************************
00:04:42.257  START TEST rpc_integrity
00:04:42.257  ************************************
00:04:42.257   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:42.257    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.257   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:42.257    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:42.257   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:42.257    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.257   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:42.257    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.257    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.516    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.516   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:42.516  {
00:04:42.516  "name": "Malloc0",
00:04:42.516  "aliases": [
00:04:42.516  "2c8b1181-ffa7-473e-b486-9ff809c60e6b"
00:04:42.516  ],
00:04:42.516  "product_name": "Malloc disk",
00:04:42.516  "block_size": 512,
00:04:42.516  "num_blocks": 16384,
00:04:42.516  "uuid": "2c8b1181-ffa7-473e-b486-9ff809c60e6b",
00:04:42.516  "assigned_rate_limits": {
00:04:42.516  "rw_ios_per_sec": 0,
00:04:42.516  "rw_mbytes_per_sec": 0,
00:04:42.516  "r_mbytes_per_sec": 0,
00:04:42.516  "w_mbytes_per_sec": 0
00:04:42.516  },
00:04:42.516  "claimed": false,
00:04:42.516  "zoned": false,
00:04:42.516  "supported_io_types": {
00:04:42.516  "read": true,
00:04:42.516  "write": true,
00:04:42.516  "unmap": true,
00:04:42.516  "flush": true,
00:04:42.516  "reset": true,
00:04:42.516  "nvme_admin": false,
00:04:42.516  "nvme_io": false,
00:04:42.516  "nvme_io_md": false,
00:04:42.516  "write_zeroes": true,
00:04:42.516  "zcopy": true,
00:04:42.516  "get_zone_info": false,
00:04:42.516  "zone_management": false,
00:04:42.516  "zone_append": false,
00:04:42.516  "compare": false,
00:04:42.516  "compare_and_write": false,
00:04:42.516  "abort": true,
00:04:42.516  "seek_hole": false,
00:04:42.516  "seek_data": false,
00:04:42.516  "copy": true,
00:04:42.516  "nvme_iov_md": false
00:04:42.516  },
00:04:42.516  "memory_domains": [
00:04:42.516  {
00:04:42.516  "dma_device_id": "system",
00:04:42.516  "dma_device_type": 1
00:04:42.516  },
00:04:42.516  {
00:04:42.516  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:42.516  "dma_device_type": 2
00:04:42.516  }
00:04:42.516  ],
00:04:42.516  "driver_specific": {}
00:04:42.516  }
00:04:42.516  ]'
00:04:42.516    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:42.516   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:42.516   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:42.516   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.516   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.516  [2024-11-20 07:48:58.377322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:42.516  [2024-11-20 07:48:58.377368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:42.516  [2024-11-20 07:48:58.377384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec8340
00:04:42.516  [2024-11-20 07:48:58.377396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:42.516  [2024-11-20 07:48:58.378409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:42.516  [2024-11-20 07:48:58.378439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:42.516  Passthru0
00:04:42.516   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.516    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:42.516    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.516    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.516    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.516   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:42.516  {
00:04:42.516  "name": "Malloc0",
00:04:42.516  "aliases": [
00:04:42.516  "2c8b1181-ffa7-473e-b486-9ff809c60e6b"
00:04:42.516  ],
00:04:42.517  "product_name": "Malloc disk",
00:04:42.517  "block_size": 512,
00:04:42.517  "num_blocks": 16384,
00:04:42.517  "uuid": "2c8b1181-ffa7-473e-b486-9ff809c60e6b",
00:04:42.517  "assigned_rate_limits": {
00:04:42.517  "rw_ios_per_sec": 0,
00:04:42.517  "rw_mbytes_per_sec": 0,
00:04:42.517  "r_mbytes_per_sec": 0,
00:04:42.517  "w_mbytes_per_sec": 0
00:04:42.517  },
00:04:42.517  "claimed": true,
00:04:42.517  "claim_type": "exclusive_write",
00:04:42.517  "zoned": false,
00:04:42.517  "supported_io_types": {
00:04:42.517  "read": true,
00:04:42.517  "write": true,
00:04:42.517  "unmap": true,
00:04:42.517  "flush": true,
00:04:42.517  "reset": true,
00:04:42.517  "nvme_admin": false,
00:04:42.517  "nvme_io": false,
00:04:42.517  "nvme_io_md": false,
00:04:42.517  "write_zeroes": true,
00:04:42.517  "zcopy": true,
00:04:42.517  "get_zone_info": false,
00:04:42.517  "zone_management": false,
00:04:42.517  "zone_append": false,
00:04:42.517  "compare": false,
00:04:42.517  "compare_and_write": false,
00:04:42.517  "abort": true,
00:04:42.517  "seek_hole": false,
00:04:42.517  "seek_data": false,
00:04:42.517  "copy": true,
00:04:42.517  "nvme_iov_md": false
00:04:42.517  },
00:04:42.517  "memory_domains": [
00:04:42.517  {
00:04:42.517  "dma_device_id": "system",
00:04:42.517  "dma_device_type": 1
00:04:42.517  },
00:04:42.517  {
00:04:42.517  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:42.517  "dma_device_type": 2
00:04:42.517  }
00:04:42.517  ],
00:04:42.517  "driver_specific": {}
00:04:42.517  },
00:04:42.517  {
00:04:42.517  "name": "Passthru0",
00:04:42.517  "aliases": [
00:04:42.517  "7f1b2714-66c4-5aa9-8fed-d09aa66ac013"
00:04:42.517  ],
00:04:42.517  "product_name": "passthru",
00:04:42.517  "block_size": 512,
00:04:42.517  "num_blocks": 16384,
00:04:42.517  "uuid": "7f1b2714-66c4-5aa9-8fed-d09aa66ac013",
00:04:42.517  "assigned_rate_limits": {
00:04:42.517  "rw_ios_per_sec": 0,
00:04:42.517  "rw_mbytes_per_sec": 0,
00:04:42.517  "r_mbytes_per_sec": 0,
00:04:42.517  "w_mbytes_per_sec": 0
00:04:42.517  },
00:04:42.517  "claimed": false,
00:04:42.517  "zoned": false,
00:04:42.517  "supported_io_types": {
00:04:42.517  "read": true,
00:04:42.517  "write": true,
00:04:42.517  "unmap": true,
00:04:42.517  "flush": true,
00:04:42.517  "reset": true,
00:04:42.517  "nvme_admin": false,
00:04:42.517  "nvme_io": false,
00:04:42.517  "nvme_io_md": false,
00:04:42.517  "write_zeroes": true,
00:04:42.517  "zcopy": true,
00:04:42.517  "get_zone_info": false,
00:04:42.517  "zone_management": false,
00:04:42.517  "zone_append": false,
00:04:42.517  "compare": false,
00:04:42.517  "compare_and_write": false,
00:04:42.517  "abort": true,
00:04:42.517  "seek_hole": false,
00:04:42.517  "seek_data": false,
00:04:42.517  "copy": true,
00:04:42.517  "nvme_iov_md": false
00:04:42.517  },
00:04:42.517  "memory_domains": [
00:04:42.517  {
00:04:42.517  "dma_device_id": "system",
00:04:42.517  "dma_device_type": 1
00:04:42.517  },
00:04:42.517  {
00:04:42.517  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:42.517  "dma_device_type": 2
00:04:42.517  }
00:04:42.517  ],
00:04:42.517  "driver_specific": {
00:04:42.517  "passthru": {
00:04:42.517  "name": "Passthru0",
00:04:42.517  "base_bdev_name": "Malloc0"
00:04:42.517  }
00:04:42.517  }
00:04:42.517  }
00:04:42.517  ]'
00:04:42.517    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:42.517   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:42.517   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.517   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.517    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:42.517    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.517    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.517    07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.517   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:42.517    07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:42.517  ************************************
00:04:42.517  END TEST rpc_integrity
00:04:42.517  ************************************
00:04:42.517   07:48:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:42.517  
00:04:42.517  real	0m0.330s
00:04:42.517  user	0m0.226s
00:04:42.517  sys	0m0.040s
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.517   07:48:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:42.776   07:48:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:42.776   07:48:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.776   07:48:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.776   07:48:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:42.776  ************************************
00:04:42.776  START TEST rpc_plugins
00:04:42.776  ************************************
00:04:42.776   07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:42.776    07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.776   07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:42.776    07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.776   07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:42.776  {
00:04:42.776  "name": "Malloc1",
00:04:42.776  "aliases": [
00:04:42.776  "55a6d21e-afdd-4c50-bb6b-f31f4076f445"
00:04:42.776  ],
00:04:42.776  "product_name": "Malloc disk",
00:04:42.776  "block_size": 4096,
00:04:42.776  "num_blocks": 256,
00:04:42.776  "uuid": "55a6d21e-afdd-4c50-bb6b-f31f4076f445",
00:04:42.776  "assigned_rate_limits": {
00:04:42.776  "rw_ios_per_sec": 0,
00:04:42.776  "rw_mbytes_per_sec": 0,
00:04:42.776  "r_mbytes_per_sec": 0,
00:04:42.776  "w_mbytes_per_sec": 0
00:04:42.776  },
00:04:42.776  "claimed": false,
00:04:42.776  "zoned": false,
00:04:42.776  "supported_io_types": {
00:04:42.776  "read": true,
00:04:42.776  "write": true,
00:04:42.776  "unmap": true,
00:04:42.776  "flush": true,
00:04:42.776  "reset": true,
00:04:42.776  "nvme_admin": false,
00:04:42.776  "nvme_io": false,
00:04:42.776  "nvme_io_md": false,
00:04:42.776  "write_zeroes": true,
00:04:42.776  "zcopy": true,
00:04:42.776  "get_zone_info": false,
00:04:42.776  "zone_management": false,
00:04:42.776  "zone_append": false,
00:04:42.776  "compare": false,
00:04:42.776  "compare_and_write": false,
00:04:42.776  "abort": true,
00:04:42.776  "seek_hole": false,
00:04:42.776  "seek_data": false,
00:04:42.776  "copy": true,
00:04:42.776  "nvme_iov_md": false
00:04:42.776  },
00:04:42.776  "memory_domains": [
00:04:42.776  {
00:04:42.776  "dma_device_id": "system",
00:04:42.776  "dma_device_type": 1
00:04:42.776  },
00:04:42.776  {
00:04:42.776  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:42.776  "dma_device_type": 2
00:04:42.776  }
00:04:42.776  ],
00:04:42.776  "driver_specific": {}
00:04:42.776  }
00:04:42.776  ]'
00:04:42.776    07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:42.776   07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:42.776   07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:42.776   07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.776   07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:42.776   07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.776    07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:42.776    07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:42.777   07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:42.777    07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:42.777  ************************************
00:04:42.777  END TEST rpc_plugins
00:04:42.777  ************************************
00:04:42.777   07:48:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:42.777  
00:04:42.777  real	0m0.166s
00:04:42.777  user	0m0.110s
00:04:42.777  sys	0m0.020s
00:04:42.777   07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.777   07:48:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:42.777   07:48:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:42.777   07:48:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:42.777   07:48:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:42.777   07:48:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.035  ************************************
00:04:43.035  START TEST rpc_trace_cmd_test
00:04:43.035  ************************************
00:04:43.035   07:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:43.035   07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.035   07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:43.035  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56584",
00:04:43.035  "tpoint_group_mask": "0x8",
00:04:43.035  "iscsi_conn": {
00:04:43.035  "mask": "0x2",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "scsi": {
00:04:43.035  "mask": "0x4",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "bdev": {
00:04:43.035  "mask": "0x8",
00:04:43.035  "tpoint_mask": "0xffffffffffffffff"
00:04:43.035  },
00:04:43.035  "nvmf_rdma": {
00:04:43.035  "mask": "0x10",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "nvmf_tcp": {
00:04:43.035  "mask": "0x20",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "ftl": {
00:04:43.035  "mask": "0x40",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "blobfs": {
00:04:43.035  "mask": "0x80",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "dsa": {
00:04:43.035  "mask": "0x200",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "thread": {
00:04:43.035  "mask": "0x400",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "nvme_pcie": {
00:04:43.035  "mask": "0x800",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "iaa": {
00:04:43.035  "mask": "0x1000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "nvme_tcp": {
00:04:43.035  "mask": "0x2000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "bdev_nvme": {
00:04:43.035  "mask": "0x4000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "sock": {
00:04:43.035  "mask": "0x8000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "blob": {
00:04:43.035  "mask": "0x10000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "bdev_raid": {
00:04:43.035  "mask": "0x20000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  },
00:04:43.035  "scheduler": {
00:04:43.035  "mask": "0x40000",
00:04:43.035  "tpoint_mask": "0x0"
00:04:43.035  }
00:04:43.035  }'
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:43.035   07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:43.035   07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:43.035   07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:43.035    07:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:43.035   07:48:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:43.035    07:48:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:43.294  ************************************
00:04:43.294  END TEST rpc_trace_cmd_test
00:04:43.294  ************************************
00:04:43.294   07:48:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:43.294  
00:04:43.294  real	0m0.272s
00:04:43.294  user	0m0.237s
00:04:43.294  sys	0m0.025s
00:04:43.294   07:48:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:43.294   07:48:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:43.294   07:48:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:43.294   07:48:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:43.294   07:48:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:43.294   07:48:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.294   07:48:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.294   07:48:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.294  ************************************
00:04:43.294  START TEST rpc_daemon_integrity
00:04:43.294  ************************************
00:04:43.294   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:43.294    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:43.294    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.294    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.294    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:43.295  {
00:04:43.295  "name": "Malloc2",
00:04:43.295  "aliases": [
00:04:43.295  "92d0b2db-867a-4118-95d2-60219351dca2"
00:04:43.295  ],
00:04:43.295  "product_name": "Malloc disk",
00:04:43.295  "block_size": 512,
00:04:43.295  "num_blocks": 16384,
00:04:43.295  "uuid": "92d0b2db-867a-4118-95d2-60219351dca2",
00:04:43.295  "assigned_rate_limits": {
00:04:43.295  "rw_ios_per_sec": 0,
00:04:43.295  "rw_mbytes_per_sec": 0,
00:04:43.295  "r_mbytes_per_sec": 0,
00:04:43.295  "w_mbytes_per_sec": 0
00:04:43.295  },
00:04:43.295  "claimed": false,
00:04:43.295  "zoned": false,
00:04:43.295  "supported_io_types": {
00:04:43.295  "read": true,
00:04:43.295  "write": true,
00:04:43.295  "unmap": true,
00:04:43.295  "flush": true,
00:04:43.295  "reset": true,
00:04:43.295  "nvme_admin": false,
00:04:43.295  "nvme_io": false,
00:04:43.295  "nvme_io_md": false,
00:04:43.295  "write_zeroes": true,
00:04:43.295  "zcopy": true,
00:04:43.295  "get_zone_info": false,
00:04:43.295  "zone_management": false,
00:04:43.295  "zone_append": false,
00:04:43.295  "compare": false,
00:04:43.295  "compare_and_write": false,
00:04:43.295  "abort": true,
00:04:43.295  "seek_hole": false,
00:04:43.295  "seek_data": false,
00:04:43.295  "copy": true,
00:04:43.295  "nvme_iov_md": false
00:04:43.295  },
00:04:43.295  "memory_domains": [
00:04:43.295  {
00:04:43.295  "dma_device_id": "system",
00:04:43.295  "dma_device_type": 1
00:04:43.295  },
00:04:43.295  {
00:04:43.295  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.295  "dma_device_type": 2
00:04:43.295  }
00:04:43.295  ],
00:04:43.295  "driver_specific": {}
00:04:43.295  }
00:04:43.295  ]'
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.295  [2024-11-20 07:48:59.289616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:43.295  [2024-11-20 07:48:59.289677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:43.295  [2024-11-20 07:48:59.289694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec8ce0
00:04:43.295  [2024-11-20 07:48:59.289703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:43.295  [2024-11-20 07:48:59.290634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:43.295  [2024-11-20 07:48:59.290658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:43.295  Passthru0
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.295    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.295   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:43.295  {
00:04:43.295  "name": "Malloc2",
00:04:43.295  "aliases": [
00:04:43.295  "92d0b2db-867a-4118-95d2-60219351dca2"
00:04:43.295  ],
00:04:43.295  "product_name": "Malloc disk",
00:04:43.295  "block_size": 512,
00:04:43.295  "num_blocks": 16384,
00:04:43.295  "uuid": "92d0b2db-867a-4118-95d2-60219351dca2",
00:04:43.295  "assigned_rate_limits": {
00:04:43.295  "rw_ios_per_sec": 0,
00:04:43.295  "rw_mbytes_per_sec": 0,
00:04:43.295  "r_mbytes_per_sec": 0,
00:04:43.295  "w_mbytes_per_sec": 0
00:04:43.295  },
00:04:43.295  "claimed": true,
00:04:43.295  "claim_type": "exclusive_write",
00:04:43.295  "zoned": false,
00:04:43.295  "supported_io_types": {
00:04:43.295  "read": true,
00:04:43.295  "write": true,
00:04:43.295  "unmap": true,
00:04:43.295  "flush": true,
00:04:43.295  "reset": true,
00:04:43.295  "nvme_admin": false,
00:04:43.295  "nvme_io": false,
00:04:43.295  "nvme_io_md": false,
00:04:43.295  "write_zeroes": true,
00:04:43.295  "zcopy": true,
00:04:43.295  "get_zone_info": false,
00:04:43.295  "zone_management": false,
00:04:43.295  "zone_append": false,
00:04:43.295  "compare": false,
00:04:43.295  "compare_and_write": false,
00:04:43.295  "abort": true,
00:04:43.295  "seek_hole": false,
00:04:43.295  "seek_data": false,
00:04:43.295  "copy": true,
00:04:43.295  "nvme_iov_md": false
00:04:43.295  },
00:04:43.295  "memory_domains": [
00:04:43.295  {
00:04:43.295  "dma_device_id": "system",
00:04:43.295  "dma_device_type": 1
00:04:43.295  },
00:04:43.295  {
00:04:43.295  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.295  "dma_device_type": 2
00:04:43.295  }
00:04:43.295  ],
00:04:43.295  "driver_specific": {}
00:04:43.295  },
00:04:43.295  {
00:04:43.295  "name": "Passthru0",
00:04:43.295  "aliases": [
00:04:43.295  "58accc27-485c-551d-bc6d-2177f45ff497"
00:04:43.295  ],
00:04:43.295  "product_name": "passthru",
00:04:43.295  "block_size": 512,
00:04:43.295  "num_blocks": 16384,
00:04:43.295  "uuid": "58accc27-485c-551d-bc6d-2177f45ff497",
00:04:43.295  "assigned_rate_limits": {
00:04:43.295  "rw_ios_per_sec": 0,
00:04:43.295  "rw_mbytes_per_sec": 0,
00:04:43.295  "r_mbytes_per_sec": 0,
00:04:43.295  "w_mbytes_per_sec": 0
00:04:43.295  },
00:04:43.295  "claimed": false,
00:04:43.295  "zoned": false,
00:04:43.295  "supported_io_types": {
00:04:43.295  "read": true,
00:04:43.295  "write": true,
00:04:43.295  "unmap": true,
00:04:43.295  "flush": true,
00:04:43.295  "reset": true,
00:04:43.295  "nvme_admin": false,
00:04:43.295  "nvme_io": false,
00:04:43.295  "nvme_io_md": false,
00:04:43.295  "write_zeroes": true,
00:04:43.295  "zcopy": true,
00:04:43.295  "get_zone_info": false,
00:04:43.295  "zone_management": false,
00:04:43.295  "zone_append": false,
00:04:43.295  "compare": false,
00:04:43.295  "compare_and_write": false,
00:04:43.295  "abort": true,
00:04:43.295  "seek_hole": false,
00:04:43.295  "seek_data": false,
00:04:43.295  "copy": true,
00:04:43.295  "nvme_iov_md": false
00:04:43.295  },
00:04:43.295  "memory_domains": [
00:04:43.295  {
00:04:43.295  "dma_device_id": "system",
00:04:43.295  "dma_device_type": 1
00:04:43.295  },
00:04:43.295  {
00:04:43.295  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:43.296  "dma_device_type": 2
00:04:43.296  }
00:04:43.296  ],
00:04:43.296  "driver_specific": {
00:04:43.296  "passthru": {
00:04:43.296  "name": "Passthru0",
00:04:43.296  "base_bdev_name": "Malloc2"
00:04:43.296  }
00:04:43.296  }
00:04:43.296  }
00:04:43.296  ]'
00:04:43.296    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.614    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:43.614    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:43.614    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.614    07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:43.614    07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:43.614  ************************************
00:04:43.614  END TEST rpc_daemon_integrity
00:04:43.614  ************************************
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:43.614  
00:04:43.614  real	0m0.331s
00:04:43.614  user	0m0.225s
00:04:43.614  sys	0m0.044s
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:43.614   07:48:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:43.614   07:48:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:43.614   07:48:59 rpc -- rpc/rpc.sh@84 -- # killprocess 56584
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 56584 ']'
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@958 -- # kill -0 56584
00:04:43.614    07:48:59 rpc -- common/autotest_common.sh@959 -- # uname
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:43.614    07:48:59 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56584
00:04:43.614  killing process with pid 56584
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56584'
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@973 -- # kill 56584
00:04:43.614   07:48:59 rpc -- common/autotest_common.sh@978 -- # wait 56584
00:04:44.181  ************************************
00:04:44.181  END TEST rpc
00:04:44.181  ************************************
00:04:44.181  
00:04:44.181  real	0m2.456s
00:04:44.181  user	0m3.142s
00:04:44.181  sys	0m0.681s
00:04:44.181   07:48:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.181   07:48:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.181   07:48:59  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:44.181   07:48:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.181   07:48:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.181   07:48:59  -- common/autotest_common.sh@10 -- # set +x
00:04:44.181  ************************************
00:04:44.181  START TEST skip_rpc
00:04:44.181  ************************************
00:04:44.181   07:49:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:44.181  * Looking for test storage...
00:04:44.181  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:44.181     07:49:00 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:44.181     07:49:00 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:44.181     07:49:00 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:44.181    07:49:00 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:44.181  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.181  		--rc genhtml_branch_coverage=1
00:04:44.181  		--rc genhtml_function_coverage=1
00:04:44.181  		--rc genhtml_legend=1
00:04:44.181  		--rc geninfo_all_blocks=1
00:04:44.181  		--rc geninfo_unexecuted_blocks=1
00:04:44.181  		
00:04:44.181  		'
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:44.181  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.181  		--rc genhtml_branch_coverage=1
00:04:44.181  		--rc genhtml_function_coverage=1
00:04:44.181  		--rc genhtml_legend=1
00:04:44.181  		--rc geninfo_all_blocks=1
00:04:44.181  		--rc geninfo_unexecuted_blocks=1
00:04:44.181  		
00:04:44.181  		'
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:44.181  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.181  		--rc genhtml_branch_coverage=1
00:04:44.181  		--rc genhtml_function_coverage=1
00:04:44.181  		--rc genhtml_legend=1
00:04:44.181  		--rc geninfo_all_blocks=1
00:04:44.181  		--rc geninfo_unexecuted_blocks=1
00:04:44.181  		
00:04:44.181  		'
00:04:44.181    07:49:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:44.181  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:44.181  		--rc genhtml_branch_coverage=1
00:04:44.181  		--rc genhtml_function_coverage=1
00:04:44.181  		--rc genhtml_legend=1
00:04:44.181  		--rc geninfo_all_blocks=1
00:04:44.181  		--rc geninfo_unexecuted_blocks=1
00:04:44.181  		
00:04:44.181  		'
00:04:44.181   07:49:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:44.181   07:49:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:44.181   07:49:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:44.181   07:49:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.181   07:49:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.181   07:49:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.181  ************************************
00:04:44.181  START TEST skip_rpc
00:04:44.181  ************************************
00:04:44.181   07:49:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:44.181   07:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56782
00:04:44.181   07:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:44.181   07:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:44.181   07:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:44.439  [2024-11-20 07:49:00.258429] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:04:44.439  [2024-11-20 07:49:00.258781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56782 ]
00:04:44.439  [2024-11-20 07:49:00.394659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.439  [2024-11-20 07:49:00.465622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:49.746    07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56782
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56782 ']'
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56782
00:04:49.746    07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:49.746    07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56782
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56782'
00:04:49.746  killing process with pid 56782
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56782
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56782
00:04:49.746  
00:04:49.746  ************************************
00:04:49.746  END TEST skip_rpc
00:04:49.746  ************************************
00:04:49.746  real	0m5.445s
00:04:49.746  user	0m5.055s
00:04:49.746  sys	0m0.303s
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:49.746   07:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
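
The skip_rpc case above is intentionally minimal: spdk_tgt is launched with --no-rpc-server, so the only thing to prove is that an RPC attempt fails while the target itself keeps running, after which the process is killed. A standalone sketch of the same flow, assuming scripts/rpc.py from this checkout and the default socket path (anything not shown in the trace is illustrative):

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start the target with the RPC server disabled, pinned to core 0.
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5   # same settle time as skip_rpc.sh@19

    # With no listener on /var/tmp/spdk.sock, any RPC must fail.
    if "$SPDK/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
        kill "$spdk_pid"; exit 1
    fi

    kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null || true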
00:04:49.746   07:49:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:49.746   07:49:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:49.746   07:49:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:49.746   07:49:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:49.746  ************************************
00:04:49.746  START TEST skip_rpc_with_json
00:04:49.746  ************************************
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56869
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56869
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56869 ']'
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:49.746  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:49.746   07:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:49.746  [2024-11-20 07:49:05.748043] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:04:49.746  [2024-11-20 07:49:05.748175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56869 ]
00:04:50.089  [2024-11-20 07:49:05.883741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.089  [2024-11-20 07:49:05.956975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:51.046  [2024-11-20 07:49:06.884719] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:51.046  request:
00:04:51.046  {
00:04:51.046  "trtype": "tcp",
00:04:51.046  "method": "nvmf_get_transports",
00:04:51.046  "req_id": 1
00:04:51.046  }
00:04:51.046  Got JSON-RPC error response
00:04:51.046  response:
00:04:51.046  {
00:04:51.046  "code": -19,
00:04:51.046  "message": "No such device"
00:04:51.046  }
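
The -19 / "No such device" response above is the expected state before any transport exists: nvmf_get_transports has nothing to report until nvmf_create_transport runs, which is exactly what the next RPC in the trace does. The same sequence by hand against a running target, assuming the default /var/tmp/spdk.sock socket:

    # Fails with code -19 while no transport is configured.
    scripts/rpc.py nvmf_get_transports --trtype tcp || true

    # Create the TCP transport, after which the same query returns it.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_get_transports --trtype tcp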
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:51.046  [2024-11-20 07:49:06.896775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.046   07:49:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:51.046   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.046   07:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:51.046  {
00:04:51.046  "subsystems": [
00:04:51.046  {
00:04:51.046  "subsystem": "fsdev",
00:04:51.046  "config": [
00:04:51.046  {
00:04:51.046  "method": "fsdev_set_opts",
00:04:51.046  "params": {
00:04:51.046  "fsdev_io_pool_size": 65535,
00:04:51.046  "fsdev_io_cache_size": 256
00:04:51.046  }
00:04:51.046  }
00:04:51.046  ]
00:04:51.046  },
00:04:51.046  {
00:04:51.046  "subsystem": "keyring",
00:04:51.046  "config": []
00:04:51.046  },
00:04:51.046  {
00:04:51.046  "subsystem": "iobuf",
00:04:51.046  "config": [
00:04:51.046  {
00:04:51.046  "method": "iobuf_set_options",
00:04:51.046  "params": {
00:04:51.046  "small_pool_count": 8192,
00:04:51.046  "large_pool_count": 1024,
00:04:51.046  "small_bufsize": 8192,
00:04:51.046  "large_bufsize": 135168,
00:04:51.046  "enable_numa": false
00:04:51.046  }
00:04:51.046  }
00:04:51.046  ]
00:04:51.046  },
00:04:51.046  {
00:04:51.046  "subsystem": "sock",
00:04:51.046  "config": [
00:04:51.046  {
00:04:51.046  "method": "sock_set_default_impl",
00:04:51.046  "params": {
00:04:51.046  "impl_name": "posix"
00:04:51.046  }
00:04:51.046  },
00:04:51.046  {
00:04:51.046  "method": "sock_impl_set_options",
00:04:51.046  "params": {
00:04:51.046  "impl_name": "ssl",
00:04:51.046  "recv_buf_size": 4096,
00:04:51.046  "send_buf_size": 4096,
00:04:51.046  "enable_recv_pipe": true,
00:04:51.046  "enable_quickack": false,
00:04:51.046  "enable_placement_id": 0,
00:04:51.046  "enable_zerocopy_send_server": true,
00:04:51.046  "enable_zerocopy_send_client": false,
00:04:51.046  "zerocopy_threshold": 0,
00:04:51.046  "tls_version": 0,
00:04:51.046  "enable_ktls": false
00:04:51.046  }
00:04:51.046  },
00:04:51.046  {
00:04:51.046  "method": "sock_impl_set_options",
00:04:51.046  "params": {
00:04:51.046  "impl_name": "posix",
00:04:51.046  "recv_buf_size": 2097152,
00:04:51.046  "send_buf_size": 2097152,
00:04:51.046  "enable_recv_pipe": true,
00:04:51.046  "enable_quickack": false,
00:04:51.046  "enable_placement_id": 0,
00:04:51.046  "enable_zerocopy_send_server": true,
00:04:51.047  "enable_zerocopy_send_client": false,
00:04:51.047  "zerocopy_threshold": 0,
00:04:51.047  "tls_version": 0,
00:04:51.047  "enable_ktls": false
00:04:51.047  }
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "vmd",
00:04:51.047  "config": []
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "accel",
00:04:51.047  "config": [
00:04:51.047  {
00:04:51.047  "method": "accel_set_options",
00:04:51.047  "params": {
00:04:51.047  "small_cache_size": 128,
00:04:51.047  "large_cache_size": 16,
00:04:51.047  "task_count": 2048,
00:04:51.047  "sequence_count": 2048,
00:04:51.047  "buf_count": 2048
00:04:51.047  }
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "bdev",
00:04:51.047  "config": [
00:04:51.047  {
00:04:51.047  "method": "bdev_set_options",
00:04:51.047  "params": {
00:04:51.047  "bdev_io_pool_size": 65535,
00:04:51.047  "bdev_io_cache_size": 256,
00:04:51.047  "bdev_auto_examine": true,
00:04:51.047  "iobuf_small_cache_size": 128,
00:04:51.047  "iobuf_large_cache_size": 16
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "bdev_raid_set_options",
00:04:51.047  "params": {
00:04:51.047  "process_window_size_kb": 1024,
00:04:51.047  "process_max_bandwidth_mb_sec": 0
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "bdev_iscsi_set_options",
00:04:51.047  "params": {
00:04:51.047  "timeout_sec": 30
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "bdev_nvme_set_options",
00:04:51.047  "params": {
00:04:51.047  "action_on_timeout": "none",
00:04:51.047  "timeout_us": 0,
00:04:51.047  "timeout_admin_us": 0,
00:04:51.047  "keep_alive_timeout_ms": 10000,
00:04:51.047  "arbitration_burst": 0,
00:04:51.047  "low_priority_weight": 0,
00:04:51.047  "medium_priority_weight": 0,
00:04:51.047  "high_priority_weight": 0,
00:04:51.047  "nvme_adminq_poll_period_us": 10000,
00:04:51.047  "nvme_ioq_poll_period_us": 0,
00:04:51.047  "io_queue_requests": 0,
00:04:51.047  "delay_cmd_submit": true,
00:04:51.047  "transport_retry_count": 4,
00:04:51.047  "bdev_retry_count": 3,
00:04:51.047  "transport_ack_timeout": 0,
00:04:51.047  "ctrlr_loss_timeout_sec": 0,
00:04:51.047  "reconnect_delay_sec": 0,
00:04:51.047  "fast_io_fail_timeout_sec": 0,
00:04:51.047  "disable_auto_failback": false,
00:04:51.047  "generate_uuids": false,
00:04:51.047  "transport_tos": 0,
00:04:51.047  "nvme_error_stat": false,
00:04:51.047  "rdma_srq_size": 0,
00:04:51.047  "io_path_stat": false,
00:04:51.047  "allow_accel_sequence": false,
00:04:51.047  "rdma_max_cq_size": 0,
00:04:51.047  "rdma_cm_event_timeout_ms": 0,
00:04:51.047  "dhchap_digests": [
00:04:51.047  "sha256",
00:04:51.047  "sha384",
00:04:51.047  "sha512"
00:04:51.047  ],
00:04:51.047  "dhchap_dhgroups": [
00:04:51.047  "null",
00:04:51.047  "ffdhe2048",
00:04:51.047  "ffdhe3072",
00:04:51.047  "ffdhe4096",
00:04:51.047  "ffdhe6144",
00:04:51.047  "ffdhe8192"
00:04:51.047  ]
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "bdev_nvme_set_hotplug",
00:04:51.047  "params": {
00:04:51.047  "period_us": 100000,
00:04:51.047  "enable": false
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "bdev_wait_for_examine"
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "scsi",
00:04:51.047  "config": null
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "scheduler",
00:04:51.047  "config": [
00:04:51.047  {
00:04:51.047  "method": "framework_set_scheduler",
00:04:51.047  "params": {
00:04:51.047  "name": "static"
00:04:51.047  }
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "vhost_scsi",
00:04:51.047  "config": []
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "vhost_blk",
00:04:51.047  "config": []
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "ublk",
00:04:51.047  "config": []
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "nbd",
00:04:51.047  "config": []
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "nvmf",
00:04:51.047  "config": [
00:04:51.047  {
00:04:51.047  "method": "nvmf_set_config",
00:04:51.047  "params": {
00:04:51.047  "discovery_filter": "match_any",
00:04:51.047  "admin_cmd_passthru": {
00:04:51.047  "identify_ctrlr": false
00:04:51.047  },
00:04:51.047  "dhchap_digests": [
00:04:51.047  "sha256",
00:04:51.047  "sha384",
00:04:51.047  "sha512"
00:04:51.047  ],
00:04:51.047  "dhchap_dhgroups": [
00:04:51.047  "null",
00:04:51.047  "ffdhe2048",
00:04:51.047  "ffdhe3072",
00:04:51.047  "ffdhe4096",
00:04:51.047  "ffdhe6144",
00:04:51.047  "ffdhe8192"
00:04:51.047  ]
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "nvmf_set_max_subsystems",
00:04:51.047  "params": {
00:04:51.047  "max_subsystems": 1024
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "nvmf_set_crdt",
00:04:51.047  "params": {
00:04:51.047  "crdt1": 0,
00:04:51.047  "crdt2": 0,
00:04:51.047  "crdt3": 0
00:04:51.047  }
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "method": "nvmf_create_transport",
00:04:51.047  "params": {
00:04:51.047  "trtype": "TCP",
00:04:51.047  "max_queue_depth": 128,
00:04:51.047  "max_io_qpairs_per_ctrlr": 127,
00:04:51.047  "in_capsule_data_size": 4096,
00:04:51.047  "max_io_size": 131072,
00:04:51.047  "io_unit_size": 131072,
00:04:51.047  "max_aq_depth": 128,
00:04:51.047  "num_shared_buffers": 511,
00:04:51.047  "buf_cache_size": 4294967295,
00:04:51.047  "dif_insert_or_strip": false,
00:04:51.047  "zcopy": false,
00:04:51.047  "c2h_success": true,
00:04:51.047  "sock_priority": 0,
00:04:51.047  "abort_timeout_sec": 1,
00:04:51.047  "ack_timeout": 0,
00:04:51.047  "data_wr_pool_size": 0
00:04:51.047  }
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  },
00:04:51.047  {
00:04:51.047  "subsystem": "iscsi",
00:04:51.047  "config": [
00:04:51.047  {
00:04:51.047  "method": "iscsi_set_options",
00:04:51.047  "params": {
00:04:51.047  "node_base": "iqn.2016-06.io.spdk",
00:04:51.047  "max_sessions": 128,
00:04:51.047  "max_connections_per_session": 2,
00:04:51.047  "max_queue_depth": 64,
00:04:51.047  "default_time2wait": 2,
00:04:51.047  "default_time2retain": 20,
00:04:51.047  "first_burst_length": 8192,
00:04:51.047  "immediate_data": true,
00:04:51.047  "allow_duplicated_isid": false,
00:04:51.047  "error_recovery_level": 0,
00:04:51.047  "nop_timeout": 60,
00:04:51.047  "nop_in_interval": 30,
00:04:51.047  "disable_chap": false,
00:04:51.047  "require_chap": false,
00:04:51.047  "mutual_chap": false,
00:04:51.047  "chap_group": 0,
00:04:51.047  "max_large_datain_per_connection": 64,
00:04:51.047  "max_r2t_per_connection": 4,
00:04:51.047  "pdu_pool_size": 36864,
00:04:51.047  "immediate_data_pool_size": 16384,
00:04:51.047  "data_out_pool_size": 2048
00:04:51.047  }
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  }
00:04:51.047  ]
00:04:51.047  }
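
That JSON blob is simply the target's live configuration as returned by save_config and written to config.json by the test before being printed with cat. To take the same snapshot by hand, something like the following should work; the jq step is an optional extra, assumes jq is installed, and is not part of the test:

    # Dump the running target's configuration to the test's config path.
    scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json

    # Optional: list which subsystems were captured in the snapshot (jq assumed available).
    jq -r '.subsystems[].subsystem' /home/vagrant/spdk_repo/spdk/test/rpc/config.json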
00:04:51.047   07:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:51.047   07:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56869
00:04:51.047   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56869 ']'
00:04:51.047   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56869
00:04:51.047    07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:51.047   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:51.306    07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56869
00:04:51.306  killing process with pid 56869
00:04:51.306   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:51.306   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:51.306   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56869'
00:04:51.306   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56869
00:04:51.306   07:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56869
00:04:51.565   07:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56896
00:04:51.565   07:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:51.565   07:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56896
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56896 ']'
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56896
00:04:56.833    07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:56.833    07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56896
00:04:56.833  killing process with pid 56896
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56896'
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56896
00:04:56.833   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56896
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:57.092  
00:04:57.092  real	0m7.245s
00:04:57.092  user	0m7.067s
00:04:57.092  sys	0m0.711s
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.092  ************************************
00:04:57.092  END TEST skip_rpc_with_json
00:04:57.092  ************************************
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
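
The replay half of skip_rpc_with_json is what the 56896 instance above exercised: a fresh spdk_tgt boots with --no-rpc-server --json config.json, its console output ends up in log.txt, and the test passes only if the 'TCP Transport Init' notice appears there, i.e. the nvmf settings really did survive the save/restore round trip. Condensed sketch, with the output capture shown as a plain redirect where the harness does the equivalent:

    CONFIG=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    LOG=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

    # Boot purely from the saved JSON; no RPC server is needed for this check.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
    pid=$!
    sleep 5
    kill "$pid"; wait "$pid" 2>/dev/null || true

    # The transport defined in config.json must have been created at startup.
    grep -q 'TCP Transport Init' "$LOG"
    rm "$LOG"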
00:04:57.092   07:49:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:57.092   07:49:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.092   07:49:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.092   07:49:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.092  ************************************
00:04:57.092  START TEST skip_rpc_with_delay
00:04:57.092  ************************************
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.092    07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.092    07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.092   07:49:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:57.092  [2024-11-20 07:49:13.066344] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:57.092  
00:04:57.092  real	0m0.093s
00:04:57.092  user	0m0.064s
00:04:57.092  sys	0m0.026s
00:04:57.092  ************************************
00:04:57.092  END TEST skip_rpc_with_delay
00:04:57.092  ************************************
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.092   07:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
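
skip_rpc_with_delay is a pure negative test: --wait-for-rpc tells the app to hold initialization until an explicit start RPC arrives, which can never happen when --no-rpc-server is also given, so spdk_app_start rejects the combination (the ERROR above) and the NOT wrapper only has to confirm the non-zero exit. The whole check reduces to:

    # Contradictory flags: wait for an RPC on a server that will never be started.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi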
00:04:57.351    07:49:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:57.351   07:49:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:57.351   07:49:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:57.351   07:49:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.352   07:49:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.352   07:49:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:57.352  ************************************
00:04:57.352  START TEST exit_on_failed_rpc_init
00:04:57.352  ************************************
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57006
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57006
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57006 ']'
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:57.352  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:57.352   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:57.352  [2024-11-20 07:49:13.199688] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:04:57.352  [2024-11-20 07:49:13.200107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57006 ]
00:04:57.352  [2024-11-20 07:49:13.329440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.352  [2024-11-20 07:49:13.388126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.922    07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.922    07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:57.922   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:57.922  [2024-11-20 07:49:13.721001] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:04:57.922  [2024-11-20 07:49:13.722027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57016 ]
00:04:57.922  [2024-11-20 07:49:13.853031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.922  [2024-11-20 07:49:13.924920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:57.922  [2024-11-20 07:49:13.925282] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:57.922  [2024-11-20 07:49:13.925504] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:57.922  [2024-11-20 07:49:13.925623] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57006
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57006 ']'
00:04:58.180   07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57006
00:04:58.180    07:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:58.180   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:58.180    07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57006
00:04:58.180  killing process with pid 57006
00:04:58.180   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:58.180   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:58.180   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57006'
00:04:58.180   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57006
00:04:58.180   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57006
00:04:58.440  
00:04:58.440  real	0m1.279s
00:04:58.440  user	0m1.360s
00:04:58.440  sys	0m0.378s
00:04:58.440   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.440   07:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:58.440  ************************************
00:04:58.440  END TEST exit_on_failed_rpc_init
00:04:58.440  ************************************
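
exit_on_failed_rpc_init covers a second target colliding on the RPC socket: instance 57006 owns /var/tmp/spdk.sock, so the second spdk_tgt (core mask 0x2) fails with "RPC Unix domain socket path ... in use", spdk_app_stop reports non-zero, and the NOT wrapper maps the raw status (es=234) down through 106 to 1. A hedged reproduction, using a plain sleep where the real test waits on the listening socket:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance takes the default RPC socket /var/tmp/spdk.sock.
    "$SPDK_TGT" -m 0x1 &
    first=$!
    sleep 5

    # Second instance on another core must refuse to start: socket already in use.
    if "$SPDK_TGT" -m 0x2; then
        echo "unexpected: second target started while the RPC socket was busy" >&2
        kill "$first"; exit 1
    fi

    kill "$first"; wait "$first" 2>/dev/null || true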
00:04:58.440   07:49:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:58.699  
00:04:58.699  real	0m14.473s
00:04:58.699  user	0m13.743s
00:04:58.699  sys	0m1.621s
00:04:58.699   07:49:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.699   07:49:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:58.699  ************************************
00:04:58.699  END TEST skip_rpc
00:04:58.699  ************************************
00:04:58.699   07:49:14  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:58.699   07:49:14  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.699   07:49:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.699   07:49:14  -- common/autotest_common.sh@10 -- # set +x
00:04:58.699  ************************************
00:04:58.699  START TEST rpc_client
00:04:58.699  ************************************
00:04:58.699   07:49:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:58.699  * Looking for test storage...
00:04:58.699  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:58.699     07:49:14 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:04:58.699     07:49:14 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:58.699     07:49:14 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:58.699    07:49:14 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:58.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.699  		--rc genhtml_branch_coverage=1
00:04:58.699  		--rc genhtml_function_coverage=1
00:04:58.699  		--rc genhtml_legend=1
00:04:58.699  		--rc geninfo_all_blocks=1
00:04:58.699  		--rc geninfo_unexecuted_blocks=1
00:04:58.699  		
00:04:58.699  		'
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:58.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.699  		--rc genhtml_branch_coverage=1
00:04:58.699  		--rc genhtml_function_coverage=1
00:04:58.699  		--rc genhtml_legend=1
00:04:58.699  		--rc geninfo_all_blocks=1
00:04:58.699  		--rc geninfo_unexecuted_blocks=1
00:04:58.699  		
00:04:58.699  		'
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:58.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.699  		--rc genhtml_branch_coverage=1
00:04:58.699  		--rc genhtml_function_coverage=1
00:04:58.699  		--rc genhtml_legend=1
00:04:58.699  		--rc geninfo_all_blocks=1
00:04:58.699  		--rc geninfo_unexecuted_blocks=1
00:04:58.699  		
00:04:58.699  		'
00:04:58.699    07:49:14 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:58.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.699  		--rc genhtml_branch_coverage=1
00:04:58.699  		--rc genhtml_function_coverage=1
00:04:58.699  		--rc genhtml_legend=1
00:04:58.699  		--rc geninfo_all_blocks=1
00:04:58.699  		--rc geninfo_unexecuted_blocks=1
00:04:58.699  		
00:04:58.699  		'
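
The block above is autotest_common.sh probing the installed lcov: it takes the last field of 'lcov --version', splits both version strings on '.', '-' and ':', and compares them field by field ('lt 1.15 2' asks whether the tool is older than 2.x) before settling on the plain --rc coverage flags exported as LCOV_OPTS and LCOV. The same comparison, self-contained, with the helper name chosen here purely for illustration:

    # version_lt A B -> succeeds if A sorts before B, field by numeric field.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing fields count as 0, so "1.15" vs "2" compares 1<2 and stops.
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"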
00:04:58.699   07:49:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:58.958  OK
00:04:58.958   07:49:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:58.958  
00:04:58.958  real	0m0.212s
00:04:58.958  user	0m0.131s
00:04:58.958  sys	0m0.087s
00:04:58.958   07:49:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.958   07:49:14 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:58.958  ************************************
00:04:58.958  END TEST rpc_client
00:04:58.958  ************************************
00:04:58.958   07:49:14  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:58.958   07:49:14  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.958   07:49:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.958   07:49:14  -- common/autotest_common.sh@10 -- # set +x
00:04:58.958  ************************************
00:04:58.958  START TEST json_config
00:04:58.958  ************************************
00:04:58.958   07:49:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:58.958    07:49:14 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:58.958     07:49:14 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:04:58.958     07:49:14 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:59.218    07:49:14 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:59.218    07:49:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.218    07:49:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.218    07:49:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.218    07:49:14 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.218    07:49:14 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.218    07:49:14 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.218    07:49:14 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.218    07:49:14 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.218    07:49:14 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.218    07:49:14 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.218    07:49:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.218    07:49:14 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:59.218    07:49:14 json_config -- scripts/common.sh@345 -- # : 1
00:04:59.218    07:49:14 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.218    07:49:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.218     07:49:14 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:59.218     07:49:14 json_config -- scripts/common.sh@353 -- # local d=1
00:04:59.218     07:49:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:59.218     07:49:14 json_config -- scripts/common.sh@355 -- # echo 1
00:04:59.218    07:49:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:59.218     07:49:15 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:59.218     07:49:15 json_config -- scripts/common.sh@353 -- # local d=2
00:04:59.218     07:49:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:59.218     07:49:15 json_config -- scripts/common.sh@355 -- # echo 2
00:04:59.218    07:49:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:59.218    07:49:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:59.218    07:49:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:59.218    07:49:15 json_config -- scripts/common.sh@368 -- # return 0
00:04:59.218    07:49:15 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:59.218    07:49:15 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:59.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.218  		--rc genhtml_branch_coverage=1
00:04:59.218  		--rc genhtml_function_coverage=1
00:04:59.218  		--rc genhtml_legend=1
00:04:59.218  		--rc geninfo_all_blocks=1
00:04:59.218  		--rc geninfo_unexecuted_blocks=1
00:04:59.218  		
00:04:59.218  		'
00:04:59.218    07:49:15 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:59.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.218  		--rc genhtml_branch_coverage=1
00:04:59.218  		--rc genhtml_function_coverage=1
00:04:59.218  		--rc genhtml_legend=1
00:04:59.218  		--rc geninfo_all_blocks=1
00:04:59.218  		--rc geninfo_unexecuted_blocks=1
00:04:59.218  		
00:04:59.218  		'
00:04:59.218    07:49:15 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:59.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.218  		--rc genhtml_branch_coverage=1
00:04:59.218  		--rc genhtml_function_coverage=1
00:04:59.218  		--rc genhtml_legend=1
00:04:59.218  		--rc geninfo_all_blocks=1
00:04:59.218  		--rc geninfo_unexecuted_blocks=1
00:04:59.218  		
00:04:59.218  		'
00:04:59.218    07:49:15 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:59.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.218  		--rc genhtml_branch_coverage=1
00:04:59.218  		--rc genhtml_function_coverage=1
00:04:59.218  		--rc genhtml_legend=1
00:04:59.218  		--rc geninfo_all_blocks=1
00:04:59.218  		--rc geninfo_unexecuted_blocks=1
00:04:59.218  		
00:04:59.218  		'
00:04:59.218   07:49:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:59.218     07:49:15 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:59.218     07:49:15 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5ba723b5-059c-4de2-baf9-14c571c75cf0
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=5ba723b5-059c-4de2-baf9-14c571c75cf0
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:59.218    07:49:15 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:59.218     07:49:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:59.218     07:49:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:59.218     07:49:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:59.218     07:49:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:59.218      07:49:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.218      07:49:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.218      07:49:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.219      07:49:15 json_config -- paths/export.sh@5 -- # export PATH
00:04:59.219      07:49:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:04:59.219     07:49:15 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:04:59.219     07:49:15 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:04:59.219     07:49:15 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@50 -- # : 0
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:04:59.219  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:04:59.219    07:49:15 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0
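
The "integer expression expected" complaint a few lines up comes from nvmf/common.sh line 31 evaluating '[' '' -eq 1 ']': whichever flag it checks there is empty on this job, and test's -eq needs integers on both sides, so it prints the warning, the comparison is simply false, and sourcing continues. Illustrative only (the variable name below is a stand-in, not the one the script uses):

    flag=""                                   # stands in for the unset test flag
    [ "$flag" -eq 1 ] && echo enabled         # "integer expression expected", treated as false
    [ "${flag:-0}" -eq 1 ] && echo enabled    # defaulting to 0 keeps the test well-typed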
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:04:59.219  WARNING: No tests are enabled so not running JSON configuration tests
00:04:59.219   07:49:15 json_config -- json_config/json_config.sh@28 -- # exit 0
00:04:59.219  ************************************
00:04:59.219  END TEST json_config
00:04:59.219  ************************************
00:04:59.219  
00:04:59.219  real	0m0.241s
00:04:59.219  user	0m0.162s
00:04:59.219  sys	0m0.077s
00:04:59.219   07:49:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.219   07:49:15 json_config -- common/autotest_common.sh@10 -- # set +x
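
json_config exits straight away on this job because none of the features it would round-trip are enabled: json_config.sh sums SPDK_TEST_BLOCKDEV, SPDK_TEST_ISCSI, SPDK_TEST_NVMF, SPDK_TEST_VHOST, SPDK_TEST_VHOST_INIT and SPDK_TEST_RBD, and a total of zero means there is no configuration worth serializing, so it prints the warning and returns success. The guard it runs is effectively:

    # Every flag is 0 on this vagrant job, hence the early, successful exit.
    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
          SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi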
00:04:59.219   07:49:15  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:59.219   07:49:15  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:59.219   07:49:15  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.219   07:49:15  -- common/autotest_common.sh@10 -- # set +x
00:04:59.219  ************************************
00:04:59.219  START TEST json_config_extra_key
00:04:59.219  ************************************
00:04:59.219   07:49:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:59.219    07:49:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:59.219     07:49:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:04:59.219     07:49:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:59.479    07:49:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:59.479    07:49:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:59.479    07:49:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:59.479    07:49:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:59.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.479  		--rc genhtml_branch_coverage=1
00:04:59.479  		--rc genhtml_function_coverage=1
00:04:59.479  		--rc genhtml_legend=1
00:04:59.479  		--rc geninfo_all_blocks=1
00:04:59.479  		--rc geninfo_unexecuted_blocks=1
00:04:59.479  		
00:04:59.479  		'
00:04:59.479    07:49:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:59.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.479  		--rc genhtml_branch_coverage=1
00:04:59.479  		--rc genhtml_function_coverage=1
00:04:59.479  		--rc genhtml_legend=1
00:04:59.479  		--rc geninfo_all_blocks=1
00:04:59.479  		--rc geninfo_unexecuted_blocks=1
00:04:59.479  		
00:04:59.479  		'
00:04:59.479    07:49:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:59.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.479  		--rc genhtml_branch_coverage=1
00:04:59.479  		--rc genhtml_function_coverage=1
00:04:59.479  		--rc genhtml_legend=1
00:04:59.479  		--rc geninfo_all_blocks=1
00:04:59.479  		--rc geninfo_unexecuted_blocks=1
00:04:59.479  		
00:04:59.479  		'
00:04:59.479    07:49:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:59.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.479  		--rc genhtml_branch_coverage=1
00:04:59.479  		--rc genhtml_function_coverage=1
00:04:59.479  		--rc genhtml_legend=1
00:04:59.479  		--rc geninfo_all_blocks=1
00:04:59.479  		--rc geninfo_unexecuted_blocks=1
00:04:59.479  		
00:04:59.479  		'
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:59.479     07:49:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:59.479     07:49:15 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5ba723b5-059c-4de2-baf9-14c571c75cf0
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=5ba723b5-059c-4de2-baf9-14c571c75cf0
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:59.479     07:49:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:59.479      07:49:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.479      07:49:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.479      07:49:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.479      07:49:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:59.479      07:49:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:04:59.479     07:49:15 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:04:59.479     07:49:15 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:04:59.479     07:49:15 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@50 -- # : 0
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:04:59.479  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:04:59.479    07:49:15 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0
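The "integer expression expected" message above is the shell's test builtin reporting that '[ "" -eq 1 ]' was evaluated: the flag checked on nvmf/common.sh line 31 is empty in this run, and -eq requires integers on both sides. The comparison simply fails (exit status 2) and the script carries on. A tiny reproduction and the usual guard (the variable name here is hypothetical):

    flag=''
    [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo enabled   # guarded form: an empty value defaults to 0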
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:59.479   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:59.480   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:59.480  INFO: launching applications...
00:04:59.480   07:49:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:59.480  Waiting for target to run...
00:04:59.480  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57196
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57196 /var/tmp/spdk_tgt.sock
00:04:59.480   07:49:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57196 ']'
00:04:59.480   07:49:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:59.480   07:49:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:59.480   07:49:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:59.480   07:49:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:59.480   07:49:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:59.480   07:49:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:59.480  [2024-11-20 07:49:15.365491] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:04:59.480  [2024-11-20 07:49:15.365827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57196 ]
00:05:00.047  [2024-11-20 07:49:15.798932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.047  [2024-11-20 07:49:15.868844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.614   07:49:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:00.614   07:49:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:00.614  
00:05:00.614  INFO: shutting down applications...
00:05:00.614   07:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:00.614   07:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57196 ]]
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57196
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57196
00:05:00.614   07:49:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57196
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:01.182   07:49:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:01.182  SPDK target shutdown done
00:05:01.182  Success
00:05:01.182   07:49:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:01.182  
00:05:01.182  real	0m1.847s
00:05:01.182  user	0m1.755s
00:05:01.182  sys	0m0.488s
00:05:01.182  ************************************
00:05:01.182  END TEST json_config_extra_key
00:05:01.182  ************************************
00:05:01.182   07:49:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.182   07:49:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
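Condensed, the json_config_extra_key flow traced above is: start spdk_tgt with the extra_key.json config and a dedicated RPC socket, wait for the listener, then shut it down by sending SIGINT and polling with kill -0 at 0.5 s intervals for up to 30 tries. A minimal sketch of that pattern, with paths and bounds taken from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!
    # ... wait for /var/tmp/spdk_tgt.sock to appear, run the checks ...
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # stop polling once the target has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'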
00:05:01.182   07:49:16  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:01.182   07:49:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.182   07:49:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.182   07:49:16  -- common/autotest_common.sh@10 -- # set +x
00:05:01.182  ************************************
00:05:01.182  START TEST alias_rpc
00:05:01.182  ************************************
00:05:01.182   07:49:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:01.182  * Looking for test storage...
00:05:01.182  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:01.182     07:49:17 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:01.182     07:49:17 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:01.182     07:49:17 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:01.182    07:49:17 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:01.182  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:01.182  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.182  		--rc genhtml_branch_coverage=1
00:05:01.182  		--rc genhtml_function_coverage=1
00:05:01.182  		--rc genhtml_legend=1
00:05:01.182  		--rc geninfo_all_blocks=1
00:05:01.182  		--rc geninfo_unexecuted_blocks=1
00:05:01.182  		
00:05:01.182  		'
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:01.182  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.182  		--rc genhtml_branch_coverage=1
00:05:01.182  		--rc genhtml_function_coverage=1
00:05:01.182  		--rc genhtml_legend=1
00:05:01.182  		--rc geninfo_all_blocks=1
00:05:01.182  		--rc geninfo_unexecuted_blocks=1
00:05:01.182  		
00:05:01.182  		'
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:01.182  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.182  		--rc genhtml_branch_coverage=1
00:05:01.182  		--rc genhtml_function_coverage=1
00:05:01.182  		--rc genhtml_legend=1
00:05:01.182  		--rc geninfo_all_blocks=1
00:05:01.182  		--rc geninfo_unexecuted_blocks=1
00:05:01.182  		
00:05:01.182  		'
00:05:01.182    07:49:17 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:01.182  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.182  		--rc genhtml_branch_coverage=1
00:05:01.182  		--rc genhtml_function_coverage=1
00:05:01.182  		--rc genhtml_legend=1
00:05:01.182  		--rc geninfo_all_blocks=1
00:05:01.182  		--rc geninfo_unexecuted_blocks=1
00:05:01.182  		
00:05:01.182  		'
00:05:01.182   07:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:01.182   07:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57274
00:05:01.182   07:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57274
00:05:01.182   07:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:01.182   07:49:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57274 ']'
00:05:01.182   07:49:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:01.182   07:49:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:01.182   07:49:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:01.182   07:49:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:01.182   07:49:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:01.441  [2024-11-20 07:49:17.257506] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:01.441  [2024-11-20 07:49:17.257859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57274 ]
00:05:01.441  [2024-11-20 07:49:17.387014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:01.441  [2024-11-20 07:49:17.474943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.009   07:49:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.009   07:49:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:02.009   07:49:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:05:02.267   07:49:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57274
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57274 ']'
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57274
00:05:02.267    07:49:18 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:02.267    07:49:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57274
00:05:02.267  killing process with pid 57274
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57274'
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 57274
00:05:02.267   07:49:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 57274
00:05:02.527  ************************************
00:05:02.527  END TEST alias_rpc
00:05:02.527  ************************************
00:05:02.527  
00:05:02.527  real	0m1.520s
00:05:02.527  user	0m1.614s
00:05:02.527  sys	0m0.439s
00:05:02.527   07:49:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.527   07:49:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x
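The killprocess step traced above does not kill blindly: it first resolves the process name with ps, checks that it is not a sudo wrapper (here it is reactor_0, the SPDK main thread), then signals the pid and waits for it to exit. As a sketch of that flow, not the helper's exact source:

    pid=57274
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # only valid here because the target is a child of the test shell
    fi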
00:05:02.527   07:49:18  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:02.527   07:49:18  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:05:02.527   07:49:18  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.527   07:49:18  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.527   07:49:18  -- common/autotest_common.sh@10 -- # set +x
00:05:02.786  ************************************
00:05:02.786  START TEST spdkcli_tcp
00:05:02.786  ************************************
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:05:02.786  * Looking for test storage...
00:05:02.786  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:02.786     07:49:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:05:02.786     07:49:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:02.786     07:49:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:02.786    07:49:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:02.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.786  		--rc genhtml_branch_coverage=1
00:05:02.786  		--rc genhtml_function_coverage=1
00:05:02.786  		--rc genhtml_legend=1
00:05:02.786  		--rc geninfo_all_blocks=1
00:05:02.786  		--rc geninfo_unexecuted_blocks=1
00:05:02.786  		
00:05:02.786  		'
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:02.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.786  		--rc genhtml_branch_coverage=1
00:05:02.786  		--rc genhtml_function_coverage=1
00:05:02.786  		--rc genhtml_legend=1
00:05:02.786  		--rc geninfo_all_blocks=1
00:05:02.786  		--rc geninfo_unexecuted_blocks=1
00:05:02.786  		
00:05:02.786  		'
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:02.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.786  		--rc genhtml_branch_coverage=1
00:05:02.786  		--rc genhtml_function_coverage=1
00:05:02.786  		--rc genhtml_legend=1
00:05:02.786  		--rc geninfo_all_blocks=1
00:05:02.786  		--rc geninfo_unexecuted_blocks=1
00:05:02.786  		
00:05:02.786  		'
00:05:02.786    07:49:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:02.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:02.786  		--rc genhtml_branch_coverage=1
00:05:02.786  		--rc genhtml_function_coverage=1
00:05:02.786  		--rc genhtml_legend=1
00:05:02.786  		--rc geninfo_all_blocks=1
00:05:02.786  		--rc geninfo_unexecuted_blocks=1
00:05:02.786  		
00:05:02.786  		'
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:05:02.786    07:49:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:05:02.786    07:49:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57350
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57350
00:05:02.786   07:49:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57350 ']'
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:02.786  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:02.786   07:49:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:02.786  [2024-11-20 07:49:18.820889] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:02.786  [2024-11-20 07:49:18.821214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57350 ]
00:05:03.045  [2024-11-20 07:49:18.958202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:03.045  [2024-11-20 07:49:19.023482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:03.045  [2024-11-20 07:49:19.023492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
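This target runs with '-m 0x3 -p 0': the hex core mask 0x3 is binary 11, i.e. cores 0 and 1, which is why two reactors start above, and -p appears to pin the main core to core 0 (the earlier single-core tests used -m 0x1). A quick illustration of the mask arithmetic:

    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # -> 0x3: enable cores 0 and 1
    build/bin/spdk_tgt -m 0x3 -p 0               # launch as in the trace above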
00:05:03.307   07:49:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:03.307   07:49:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:03.307   07:49:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57360
00:05:03.307   07:49:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:03.307   07:49:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:03.565  [
00:05:03.565    "bdev_malloc_delete",
00:05:03.565    "bdev_malloc_create",
00:05:03.565    "bdev_null_resize",
00:05:03.565    "bdev_null_delete",
00:05:03.565    "bdev_null_create",
00:05:03.565    "bdev_nvme_cuse_unregister",
00:05:03.565    "bdev_nvme_cuse_register",
00:05:03.565    "bdev_opal_new_user",
00:05:03.565    "bdev_opal_set_lock_state",
00:05:03.565    "bdev_opal_delete",
00:05:03.565    "bdev_opal_get_info",
00:05:03.565    "bdev_opal_create",
00:05:03.565    "bdev_nvme_opal_revert",
00:05:03.565    "bdev_nvme_opal_init",
00:05:03.565    "bdev_nvme_send_cmd",
00:05:03.565    "bdev_nvme_set_keys",
00:05:03.565    "bdev_nvme_get_path_iostat",
00:05:03.565    "bdev_nvme_get_mdns_discovery_info",
00:05:03.565    "bdev_nvme_stop_mdns_discovery",
00:05:03.565    "bdev_nvme_start_mdns_discovery",
00:05:03.565    "bdev_nvme_set_multipath_policy",
00:05:03.565    "bdev_nvme_set_preferred_path",
00:05:03.565    "bdev_nvme_get_io_paths",
00:05:03.565    "bdev_nvme_remove_error_injection",
00:05:03.565    "bdev_nvme_add_error_injection",
00:05:03.565    "bdev_nvme_get_discovery_info",
00:05:03.565    "bdev_nvme_stop_discovery",
00:05:03.565    "bdev_nvme_start_discovery",
00:05:03.565    "bdev_nvme_get_controller_health_info",
00:05:03.565    "bdev_nvme_disable_controller",
00:05:03.565    "bdev_nvme_enable_controller",
00:05:03.565    "bdev_nvme_reset_controller",
00:05:03.565    "bdev_nvme_get_transport_statistics",
00:05:03.565    "bdev_nvme_apply_firmware",
00:05:03.565    "bdev_nvme_detach_controller",
00:05:03.565    "bdev_nvme_get_controllers",
00:05:03.565    "bdev_nvme_attach_controller",
00:05:03.565    "bdev_nvme_set_hotplug",
00:05:03.565    "bdev_nvme_set_options",
00:05:03.565    "bdev_passthru_delete",
00:05:03.565    "bdev_passthru_create",
00:05:03.565    "bdev_lvol_set_parent_bdev",
00:05:03.565    "bdev_lvol_set_parent",
00:05:03.565    "bdev_lvol_check_shallow_copy",
00:05:03.565    "bdev_lvol_start_shallow_copy",
00:05:03.565    "bdev_lvol_grow_lvstore",
00:05:03.565    "bdev_lvol_get_lvols",
00:05:03.565    "bdev_lvol_get_lvstores",
00:05:03.565    "bdev_lvol_delete",
00:05:03.565    "bdev_lvol_set_read_only",
00:05:03.565    "bdev_lvol_resize",
00:05:03.565    "bdev_lvol_decouple_parent",
00:05:03.565    "bdev_lvol_inflate",
00:05:03.565    "bdev_lvol_rename",
00:05:03.565    "bdev_lvol_clone_bdev",
00:05:03.565    "bdev_lvol_clone",
00:05:03.565    "bdev_lvol_snapshot",
00:05:03.565    "bdev_lvol_create",
00:05:03.565    "bdev_lvol_delete_lvstore",
00:05:03.565    "bdev_lvol_rename_lvstore",
00:05:03.565    "bdev_lvol_create_lvstore",
00:05:03.565    "bdev_raid_set_options",
00:05:03.565    "bdev_raid_remove_base_bdev",
00:05:03.565    "bdev_raid_add_base_bdev",
00:05:03.565    "bdev_raid_delete",
00:05:03.565    "bdev_raid_create",
00:05:03.565    "bdev_raid_get_bdevs",
00:05:03.565    "bdev_error_inject_error",
00:05:03.565    "bdev_error_delete",
00:05:03.565    "bdev_error_create",
00:05:03.565    "bdev_split_delete",
00:05:03.565    "bdev_split_create",
00:05:03.565    "bdev_delay_delete",
00:05:03.565    "bdev_delay_create",
00:05:03.565    "bdev_delay_update_latency",
00:05:03.565    "bdev_zone_block_delete",
00:05:03.565    "bdev_zone_block_create",
00:05:03.565    "blobfs_create",
00:05:03.565    "blobfs_detect",
00:05:03.565    "blobfs_set_cache_size",
00:05:03.565    "bdev_aio_delete",
00:05:03.565    "bdev_aio_rescan",
00:05:03.565    "bdev_aio_create",
00:05:03.565    "bdev_ftl_set_property",
00:05:03.565    "bdev_ftl_get_properties",
00:05:03.565    "bdev_ftl_get_stats",
00:05:03.565    "bdev_ftl_unmap",
00:05:03.565    "bdev_ftl_unload",
00:05:03.565    "bdev_ftl_delete",
00:05:03.565    "bdev_ftl_load",
00:05:03.565    "bdev_ftl_create",
00:05:03.565    "bdev_virtio_attach_controller",
00:05:03.565    "bdev_virtio_scsi_get_devices",
00:05:03.565    "bdev_virtio_detach_controller",
00:05:03.565    "bdev_virtio_blk_set_hotplug",
00:05:03.565    "bdev_iscsi_delete",
00:05:03.565    "bdev_iscsi_create",
00:05:03.565    "bdev_iscsi_set_options",
00:05:03.565    "accel_error_inject_error",
00:05:03.565    "ioat_scan_accel_module",
00:05:03.565    "dsa_scan_accel_module",
00:05:03.565    "iaa_scan_accel_module",
00:05:03.565    "keyring_file_remove_key",
00:05:03.565    "keyring_file_add_key",
00:05:03.565    "keyring_linux_set_options",
00:05:03.565    "fsdev_aio_delete",
00:05:03.565    "fsdev_aio_create",
00:05:03.565    "iscsi_get_histogram",
00:05:03.565    "iscsi_enable_histogram",
00:05:03.565    "iscsi_set_options",
00:05:03.565    "iscsi_get_auth_groups",
00:05:03.565    "iscsi_auth_group_remove_secret",
00:05:03.565    "iscsi_auth_group_add_secret",
00:05:03.565    "iscsi_delete_auth_group",
00:05:03.565    "iscsi_create_auth_group",
00:05:03.565    "iscsi_set_discovery_auth",
00:05:03.565    "iscsi_get_options",
00:05:03.565    "iscsi_target_node_request_logout",
00:05:03.565    "iscsi_target_node_set_redirect",
00:05:03.565    "iscsi_target_node_set_auth",
00:05:03.565    "iscsi_target_node_add_lun",
00:05:03.565    "iscsi_get_stats",
00:05:03.565    "iscsi_get_connections",
00:05:03.565    "iscsi_portal_group_set_auth",
00:05:03.565    "iscsi_start_portal_group",
00:05:03.565    "iscsi_delete_portal_group",
00:05:03.565    "iscsi_create_portal_group",
00:05:03.565    "iscsi_get_portal_groups",
00:05:03.565    "iscsi_delete_target_node",
00:05:03.565    "iscsi_target_node_remove_pg_ig_maps",
00:05:03.565    "iscsi_target_node_add_pg_ig_maps",
00:05:03.565    "iscsi_create_target_node",
00:05:03.565    "iscsi_get_target_nodes",
00:05:03.565    "iscsi_delete_initiator_group",
00:05:03.565    "iscsi_initiator_group_remove_initiators",
00:05:03.565    "iscsi_initiator_group_add_initiators",
00:05:03.565    "iscsi_create_initiator_group",
00:05:03.565    "iscsi_get_initiator_groups",
00:05:03.565    "nvmf_set_crdt",
00:05:03.565    "nvmf_set_config",
00:05:03.565    "nvmf_set_max_subsystems",
00:05:03.565    "nvmf_stop_mdns_prr",
00:05:03.565    "nvmf_publish_mdns_prr",
00:05:03.565    "nvmf_subsystem_get_listeners",
00:05:03.565    "nvmf_subsystem_get_qpairs",
00:05:03.565    "nvmf_subsystem_get_controllers",
00:05:03.565    "nvmf_get_stats",
00:05:03.565    "nvmf_get_transports",
00:05:03.565    "nvmf_create_transport",
00:05:03.565    "nvmf_get_targets",
00:05:03.565    "nvmf_delete_target",
00:05:03.565    "nvmf_create_target",
00:05:03.565    "nvmf_subsystem_allow_any_host",
00:05:03.565    "nvmf_subsystem_set_keys",
00:05:03.565    "nvmf_subsystem_remove_host",
00:05:03.565    "nvmf_subsystem_add_host",
00:05:03.565    "nvmf_ns_remove_host",
00:05:03.565    "nvmf_ns_add_host",
00:05:03.565    "nvmf_subsystem_remove_ns",
00:05:03.565    "nvmf_subsystem_set_ns_ana_group",
00:05:03.565    "nvmf_subsystem_add_ns",
00:05:03.565    "nvmf_subsystem_listener_set_ana_state",
00:05:03.565    "nvmf_discovery_get_referrals",
00:05:03.565    "nvmf_discovery_remove_referral",
00:05:03.565    "nvmf_discovery_add_referral",
00:05:03.565    "nvmf_subsystem_remove_listener",
00:05:03.565    "nvmf_subsystem_add_listener",
00:05:03.565    "nvmf_delete_subsystem",
00:05:03.565    "nvmf_create_subsystem",
00:05:03.565    "nvmf_get_subsystems",
00:05:03.565    "env_dpdk_get_mem_stats",
00:05:03.565    "nbd_get_disks",
00:05:03.565    "nbd_stop_disk",
00:05:03.565    "nbd_start_disk",
00:05:03.565    "ublk_recover_disk",
00:05:03.565    "ublk_get_disks",
00:05:03.565    "ublk_stop_disk",
00:05:03.565    "ublk_start_disk",
00:05:03.565    "ublk_destroy_target",
00:05:03.565    "ublk_create_target",
00:05:03.565    "virtio_blk_create_transport",
00:05:03.565    "virtio_blk_get_transports",
00:05:03.565    "vhost_controller_set_coalescing",
00:05:03.565    "vhost_get_controllers",
00:05:03.565    "vhost_delete_controller",
00:05:03.565    "vhost_create_blk_controller",
00:05:03.565    "vhost_scsi_controller_remove_target",
00:05:03.566    "vhost_scsi_controller_add_target",
00:05:03.566    "vhost_start_scsi_controller",
00:05:03.566    "vhost_create_scsi_controller",
00:05:03.566    "thread_set_cpumask",
00:05:03.566    "scheduler_set_options",
00:05:03.566    "framework_get_governor",
00:05:03.566    "framework_get_scheduler",
00:05:03.566    "framework_set_scheduler",
00:05:03.566    "framework_get_reactors",
00:05:03.566    "thread_get_io_channels",
00:05:03.566    "thread_get_pollers",
00:05:03.566    "thread_get_stats",
00:05:03.566    "framework_monitor_context_switch",
00:05:03.566    "spdk_kill_instance",
00:05:03.566    "log_enable_timestamps",
00:05:03.566    "log_get_flags",
00:05:03.566    "log_clear_flag",
00:05:03.566    "log_set_flag",
00:05:03.566    "log_get_level",
00:05:03.566    "log_set_level",
00:05:03.566    "log_get_print_level",
00:05:03.566    "log_set_print_level",
00:05:03.566    "framework_enable_cpumask_locks",
00:05:03.566    "framework_disable_cpumask_locks",
00:05:03.566    "framework_wait_init",
00:05:03.566    "framework_start_init",
00:05:03.566    "scsi_get_devices",
00:05:03.566    "bdev_get_histogram",
00:05:03.566    "bdev_enable_histogram",
00:05:03.566    "bdev_set_qos_limit",
00:05:03.566    "bdev_set_qd_sampling_period",
00:05:03.566    "bdev_get_bdevs",
00:05:03.566    "bdev_reset_iostat",
00:05:03.566    "bdev_get_iostat",
00:05:03.566    "bdev_examine",
00:05:03.566    "bdev_wait_for_examine",
00:05:03.566    "bdev_set_options",
00:05:03.566    "accel_get_stats",
00:05:03.566    "accel_set_options",
00:05:03.566    "accel_set_driver",
00:05:03.566    "accel_crypto_key_destroy",
00:05:03.566    "accel_crypto_keys_get",
00:05:03.566    "accel_crypto_key_create",
00:05:03.566    "accel_assign_opc",
00:05:03.566    "accel_get_module_info",
00:05:03.566    "accel_get_opc_assignments",
00:05:03.566    "vmd_rescan",
00:05:03.566    "vmd_remove_device",
00:05:03.566    "vmd_enable",
00:05:03.566    "sock_get_default_impl",
00:05:03.566    "sock_set_default_impl",
00:05:03.566    "sock_impl_set_options",
00:05:03.566    "sock_impl_get_options",
00:05:03.566    "iobuf_get_stats",
00:05:03.566    "iobuf_set_options",
00:05:03.566    "keyring_get_keys",
00:05:03.566    "framework_get_pci_devices",
00:05:03.566    "framework_get_config",
00:05:03.566    "framework_get_subsystems",
00:05:03.566    "fsdev_set_opts",
00:05:03.566    "fsdev_get_opts",
00:05:03.566    "trace_get_info",
00:05:03.566    "trace_get_tpoint_group_mask",
00:05:03.566    "trace_disable_tpoint_group",
00:05:03.566    "trace_enable_tpoint_group",
00:05:03.566    "trace_clear_tpoint_mask",
00:05:03.566    "trace_set_tpoint_mask",
00:05:03.566    "notify_get_notifications",
00:05:03.566    "notify_get_types",
00:05:03.566    "spdk_get_version",
00:05:03.566    "rpc_get_methods"
00:05:03.566  ]
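The method list above was fetched over TCP rather than the UNIX socket directly: socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998 with 100 retries and a 2 s timeout. Reproducing that wiring by hand looks roughly like this (addresses, port and flags as in the trace):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods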
00:05:03.566   07:49:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:03.825   07:49:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:03.825   07:49:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57350
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57350 ']'
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57350
00:05:03.825    07:49:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:03.825    07:49:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57350
00:05:03.825  killing process with pid 57350
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57350'
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57350
00:05:03.825   07:49:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57350
00:05:04.084  ************************************
00:05:04.084  END TEST spdkcli_tcp
00:05:04.084  ************************************
00:05:04.084  
00:05:04.084  real	0m1.474s
00:05:04.084  user	0m2.519s
00:05:04.084  sys	0m0.464s
00:05:04.084   07:49:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.084   07:49:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:04.084   07:49:20  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:04.084   07:49:20  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.084   07:49:20  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.084   07:49:20  -- common/autotest_common.sh@10 -- # set +x
00:05:04.084  ************************************
00:05:04.084  START TEST dpdk_mem_utility
00:05:04.084  ************************************
00:05:04.084   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:04.343  * Looking for test storage...
00:05:04.343  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:05:04.343    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:04.343     07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:04.343     07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:05:04.343    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:04.343     07:49:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:04.343    07:49:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:04.344    07:49:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:04.344    07:49:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:04.344    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:04.344    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:04.344  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.344  		--rc genhtml_branch_coverage=1
00:05:04.344  		--rc genhtml_function_coverage=1
00:05:04.344  		--rc genhtml_legend=1
00:05:04.344  		--rc geninfo_all_blocks=1
00:05:04.344  		--rc geninfo_unexecuted_blocks=1
00:05:04.344  		
00:05:04.344  		'
00:05:04.344    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:04.344  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.344  		--rc genhtml_branch_coverage=1
00:05:04.344  		--rc genhtml_function_coverage=1
00:05:04.344  		--rc genhtml_legend=1
00:05:04.344  		--rc geninfo_all_blocks=1
00:05:04.344  		--rc geninfo_unexecuted_blocks=1
00:05:04.344  		
00:05:04.344  		'
00:05:04.344    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:04.344  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.344  		--rc genhtml_branch_coverage=1
00:05:04.344  		--rc genhtml_function_coverage=1
00:05:04.344  		--rc genhtml_legend=1
00:05:04.344  		--rc geninfo_all_blocks=1
00:05:04.344  		--rc geninfo_unexecuted_blocks=1
00:05:04.344  		
00:05:04.344  		'
00:05:04.344    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:04.344  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.344  		--rc genhtml_branch_coverage=1
00:05:04.344  		--rc genhtml_function_coverage=1
00:05:04.344  		--rc genhtml_legend=1
00:05:04.344  		--rc geninfo_all_blocks=1
00:05:04.344  		--rc geninfo_unexecuted_blocks=1
00:05:04.344  		
00:05:04.344  		'
00:05:04.344   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:04.344   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57442
00:05:04.344   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:04.344   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57442
00:05:04.344   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57442 ']'
00:05:04.344   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.344   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:04.344   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:04.344  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:04.344   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:04.344   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:04.344  [2024-11-20 07:49:20.338498] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:04.344  [2024-11-20 07:49:20.338909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57442 ]
00:05:04.602  [2024-11-20 07:49:20.468794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.602  [2024-11-20 07:49:20.532666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.861   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:04.861   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:04.861   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:04.861   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:04.861   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.861   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:04.861  {
00:05:04.861  "filename": "/tmp/spdk_mem_dump.txt"
00:05:04.861  }
00:05:04.861   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.861   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:04.861  DPDK memory size 810.000000 MiB in 1 heap(s)
00:05:04.861  1 heaps totaling size 810.000000 MiB
00:05:04.861    size:  810.000000 MiB heap id: 0
00:05:04.861  end heaps----------
00:05:04.861  9 mempools totaling size 595.772034 MiB
00:05:04.861    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:05:04.861    size:  158.602051 MiB name: PDU_data_out_Pool
00:05:04.861    size:   92.545471 MiB name: bdev_io_57442
00:05:04.861    size:   50.003479 MiB name: msgpool_57442
00:05:04.861    size:   36.509338 MiB name: fsdev_io_57442
00:05:04.861    size:   21.763794 MiB name: PDU_Pool
00:05:04.861    size:   19.513306 MiB name: SCSI_TASK_Pool
00:05:04.861    size:    4.133484 MiB name: evtpool_57442
00:05:04.861    size:    0.026123 MiB name: Session_Pool
00:05:04.861  end mempools-------
00:05:04.861  6 memzones totaling size 4.142822 MiB
00:05:04.861    size:    1.000366 MiB name: RG_ring_0_57442
00:05:04.861    size:    1.000366 MiB name: RG_ring_1_57442
00:05:04.861    size:    1.000366 MiB name: RG_ring_4_57442
00:05:04.861    size:    1.000366 MiB name: RG_ring_5_57442
00:05:04.861    size:    0.125366 MiB name: RG_ring_2_57442
00:05:04.861    size:    0.015991 MiB name: RG_ring_3_57442
00:05:04.861  end memzones-------
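The summary above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target dump its DPDK memory state (to /tmp/spdk_mem_dump.txt in this run), and dpdk_mem_info.py then parses that dump; the -m 0 invocation that follows narrows the report to heap id 0 and prints the per-element detail below. As a sketch:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0            # per-element listing for heap id 0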
00:05:04.861   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:05.122  heap id: 0 total size: 810.000000 MiB number of busy elements: 311 number of free elements: 15
00:05:05.122    list of free elements. size: 10.813599 MiB
00:05:05.122      element at address: 0x200018a00000 with size:    0.999878 MiB
00:05:05.122      element at address: 0x200018c00000 with size:    0.999878 MiB
00:05:05.122      element at address: 0x200031800000 with size:    0.994446 MiB
00:05:05.122      element at address: 0x200000400000 with size:    0.993958 MiB
00:05:05.122      element at address: 0x200006400000 with size:    0.959839 MiB
00:05:05.122      element at address: 0x200012c00000 with size:    0.954285 MiB
00:05:05.122      element at address: 0x200018e00000 with size:    0.936584 MiB
00:05:05.122      element at address: 0x200000200000 with size:    0.717346 MiB
00:05:05.122      element at address: 0x20001a600000 with size:    0.567322 MiB
00:05:05.122      element at address: 0x20000a600000 with size:    0.488892 MiB
00:05:05.122      element at address: 0x200000c00000 with size:    0.487000 MiB
00:05:05.122      element at address: 0x200019000000 with size:    0.485657 MiB
00:05:05.122      element at address: 0x200003e00000 with size:    0.480286 MiB
00:05:05.122      element at address: 0x200027a00000 with size:    0.396484 MiB
00:05:05.122      element at address: 0x200000800000 with size:    0.351746 MiB
00:05:05.122    list of standard malloc elements. size: 199.267517 MiB
00:05:05.122      element at address: 0x20000a7fff80 with size:  132.000122 MiB
00:05:05.122      element at address: 0x2000065fff80 with size:   64.000122 MiB
00:05:05.122      element at address: 0x200018afff80 with size:    1.000122 MiB
00:05:05.122      element at address: 0x200018cfff80 with size:    1.000122 MiB
00:05:05.122      element at address: 0x200018efff80 with size:    1.000122 MiB
00:05:05.122      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:05:05.122      element at address: 0x200018eeff00 with size:    0.062622 MiB
00:05:05.122      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:05:05.122      element at address: 0x200018eefdc0 with size:    0.000305 MiB
00:05:05.122      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fe740 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fe800 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fe8c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fe980 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fea40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004feb00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004febc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fec80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fed40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fee00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004feec0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004fef80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff040 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff100 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff1c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff280 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff340 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff400 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff4c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff580 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff640 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff700 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff7c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff880 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ff940 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ffcc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000085a0c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000085a2c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000085e580 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087e840 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087e900 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087e9c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087ea80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087eb40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087ec00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087ecc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087ed80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087ee40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087ef00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087efc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f080 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f140 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f200 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f2c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f380 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f440 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f500 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f5c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000087f680 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000008ff940 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7cac0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7cb80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7cc40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7cd00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7cdc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7ce80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7cf40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d000 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d0c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d180 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d240 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d300 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d3c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d480 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d540 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d600 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d6c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d780 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d840 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d900 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7d9c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7da80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7db40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7dc00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7dcc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7dd80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7de40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7df00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7dfc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e080 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e140 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e200 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e2c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e380 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e440 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e500 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e5c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e680 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e740 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e800 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e8c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7e980 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7ea40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7eb00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7ebc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7ec80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000cff000 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7af40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b000 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b0c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b180 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b240 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b300 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b3c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b480 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b540 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b600 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003e7b6c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200003efb980 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000064fdd80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d280 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d340 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d400 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d4c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d580 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d640 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d700 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d7c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d880 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67d940 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67da00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a67dac0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20000a6fdd80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200012cf44c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200018eefc40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x200018eefd00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x2000190bc740 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a6913c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691480 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691540 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691600 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a6916c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691780 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691840 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691900 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a6919c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691a80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691b40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691c00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691cc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691d80 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691e40 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691f00 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a691fc0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692080 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692140 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692200 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a6922c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692380 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692440 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692500 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a6925c0 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692680 with size:    0.000183 MiB
00:05:05.122      element at address: 0x20001a692740 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692800 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6928c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692980 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692a40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692b00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692bc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692c80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692d40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692e00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692ec0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a692f80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693040 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693100 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6931c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693280 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693340 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693400 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6934c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693580 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693640 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693700 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6937c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693880 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693940 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693a00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693ac0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693b80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693c40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693d00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693dc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693e80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a693f40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694000 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6940c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694180 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694240 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694300 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6943c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694480 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694540 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694600 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6946c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694780 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694840 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694900 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6949c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694a80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694b40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694c00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694cc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694d80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694e40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694f00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a694fc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a695080 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a695140 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a695200 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a6952c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a695380 with size:    0.000183 MiB
00:05:05.123      element at address: 0x20001a695440 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a65800 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a658c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6c4c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6c6c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6c780 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6c840 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6c900 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6c9c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ca80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6cb40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6cc00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ccc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6cd80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ce40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6cf00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6cfc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d080 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d140 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d200 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d2c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d380 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d440 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d500 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d5c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d680 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d740 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d800 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d8c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6d980 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6da40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6db00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6dbc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6dc80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6dd40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6de00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6dec0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6df80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e040 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e100 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e1c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e280 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e340 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e400 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e4c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e580 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e640 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e700 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e7c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e880 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6e940 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ea00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6eac0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6eb80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ec40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ed00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6edc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ee80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ef40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f000 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f0c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f180 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f240 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f300 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f3c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f480 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f540 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f600 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f6c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f780 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f840 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f900 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6f9c0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6fa80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6fb40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6fc00 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6fcc0 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6fd80 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6fe40 with size:    0.000183 MiB
00:05:05.123      element at address: 0x200027a6ff00 with size:    0.000183 MiB
00:05:05.123    list of memzone associated elements. size: 599.918884 MiB
00:05:05.123      element at address: 0x20001a695500 with size:  211.416748 MiB
00:05:05.123        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:05.123      element at address: 0x200027a6ffc0 with size:  157.562561 MiB
00:05:05.123        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:05.123      element at address: 0x200012df4780 with size:   92.045044 MiB
00:05:05.123        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_57442_0
00:05:05.123      element at address: 0x200000dff380 with size:   48.003052 MiB
00:05:05.123        associated memzone info: size:   48.002930 MiB name: MP_msgpool_57442_0
00:05:05.123      element at address: 0x200003ffdb80 with size:   36.008911 MiB
00:05:05.123        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_57442_0
00:05:05.123      element at address: 0x2000191be940 with size:   20.255554 MiB
00:05:05.123        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:05:05.123      element at address: 0x2000319feb40 with size:   18.005066 MiB
00:05:05.123        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:05.123      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:05:05.123        associated memzone info: size:    3.000122 MiB name: MP_evtpool_57442_0
00:05:05.123      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:05:05.123        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_57442
00:05:05.123      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:05:05.123        associated memzone info: size:    1.007996 MiB name: MP_evtpool_57442
00:05:05.123      element at address: 0x20000a6fde40 with size:    1.008118 MiB
00:05:05.123        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:05:05.123      element at address: 0x2000190bc800 with size:    1.008118 MiB
00:05:05.123        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:05.123      element at address: 0x2000064fde40 with size:    1.008118 MiB
00:05:05.123        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:05:05.123      element at address: 0x200003efba40 with size:    1.008118 MiB
00:05:05.123        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:05.123      element at address: 0x200000cff180 with size:    1.000488 MiB
00:05:05.123        associated memzone info: size:    1.000366 MiB name: RG_ring_0_57442
00:05:05.123      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:05:05.123        associated memzone info: size:    1.000366 MiB name: RG_ring_1_57442
00:05:05.123      element at address: 0x200012cf4580 with size:    1.000488 MiB
00:05:05.123        associated memzone info: size:    1.000366 MiB name: RG_ring_4_57442
00:05:05.123      element at address: 0x2000318fe940 with size:    1.000488 MiB
00:05:05.123        associated memzone info: size:    1.000366 MiB name: RG_ring_5_57442
00:05:05.123      element at address: 0x20000087f740 with size:    0.500488 MiB
00:05:05.123        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_57442
00:05:05.123      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:05:05.123        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_57442
00:05:05.123      element at address: 0x20000a67db80 with size:    0.500488 MiB
00:05:05.123        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:05:05.123      element at address: 0x200003e7b780 with size:    0.500488 MiB
00:05:05.123        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:05.123      element at address: 0x20001907c540 with size:    0.250488 MiB
00:05:05.123        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:05.123      element at address: 0x2000002b7a40 with size:    0.125488 MiB
00:05:05.123        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_57442
00:05:05.123      element at address: 0x20000085e640 with size:    0.125488 MiB
00:05:05.123        associated memzone info: size:    0.125366 MiB name: RG_ring_2_57442
00:05:05.123      element at address: 0x2000064f5b80 with size:    0.031738 MiB
00:05:05.123        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:05.123      element at address: 0x200027a65980 with size:    0.023743 MiB
00:05:05.123        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:05:05.123      element at address: 0x20000085a380 with size:    0.016113 MiB
00:05:05.123        associated memzone info: size:    0.015991 MiB name: RG_ring_3_57442
00:05:05.123      element at address: 0x200027a6bac0 with size:    0.002441 MiB
00:05:05.123        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:05:05.123      element at address: 0x2000004ffb80 with size:    0.000305 MiB
00:05:05.123        associated memzone info: size:    0.000183 MiB name: MP_msgpool_57442
00:05:05.123      element at address: 0x2000008ffa00 with size:    0.000305 MiB
00:05:05.123        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_57442
00:05:05.123      element at address: 0x20000085a180 with size:    0.000305 MiB
00:05:05.123        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_57442
00:05:05.123      element at address: 0x200027a6c580 with size:    0.000305 MiB
00:05:05.123        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
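  [annotation] The per-element heap listing and the memzone summary above are the dpdk_mem_utility test's view of the target's DPDK memory state: every rte_malloc heap element with its address and size, followed by the named memzones (the MP_* entries are mempool reservations for the bdev/msgpool/PDU pools). A raw dump of the same underlying state can be produced directly from a DPDK application with the calls sketched below; rte_malloc_dump_heaps() and rte_memzone_dump() are standard DPDK APIs, while the program wiring around them is illustrative only and is not part of the test above.

      /* dump_mem_sketch.c - minimal sketch: print rte_malloc heap elements and
       * named memzones, the raw data behind a listing like the one above.
       * EAL options come from the command line (e.g. --no-huge -m 256). */
      #include <stdio.h>
      #include <rte_eal.h>
      #include <rte_malloc.h>
      #include <rte_memzone.h>

      int main(int argc, char **argv)
      {
          if (rte_eal_init(argc, argv) < 0) {
              fprintf(stderr, "EAL init failed\n");
              return 1;
          }

          /* Per-heap element dump: addresses, sizes, busy/free state. */
          rte_malloc_dump_heaps(stdout);

          /* Named memzone reservations (the "memzone associated elements" part). */
          rte_memzone_dump(stdout);

          rte_eal_cleanup();
          return 0;
      }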
00:05:05.123   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:05.123   07:49:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57442
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57442 ']'
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57442
00:05:05.123    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:05.123    07:49:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57442
00:05:05.123  killing process with pid 57442
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57442'
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57442
00:05:05.123   07:49:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57442
00:05:05.382  ************************************
00:05:05.382  END TEST dpdk_mem_utility
00:05:05.382  ************************************
00:05:05.382  
00:05:05.382  real	0m1.285s
00:05:05.382  user	0m1.248s
00:05:05.382  sys	0m0.414s
00:05:05.382   07:49:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.382   07:49:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:05.382   07:49:21  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:05.382   07:49:21  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.382   07:49:21  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.382   07:49:21  -- common/autotest_common.sh@10 -- # set +x
00:05:05.639  ************************************
00:05:05.639  START TEST event
00:05:05.639  ************************************
00:05:05.639   07:49:21 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:05.639  * Looking for test storage...
00:05:05.639  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:05.639     07:49:21 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:05.639     07:49:21 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:05.639    07:49:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:05.639    07:49:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:05.639    07:49:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:05.639    07:49:21 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:05.639    07:49:21 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:05.639    07:49:21 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:05.639    07:49:21 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:05.639    07:49:21 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:05.639    07:49:21 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:05.639    07:49:21 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:05.639    07:49:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:05.639    07:49:21 event -- scripts/common.sh@344 -- # case "$op" in
00:05:05.639    07:49:21 event -- scripts/common.sh@345 -- # : 1
00:05:05.639    07:49:21 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:05.639    07:49:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:05.639     07:49:21 event -- scripts/common.sh@365 -- # decimal 1
00:05:05.639     07:49:21 event -- scripts/common.sh@353 -- # local d=1
00:05:05.639     07:49:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:05.639     07:49:21 event -- scripts/common.sh@355 -- # echo 1
00:05:05.639    07:49:21 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:05.639     07:49:21 event -- scripts/common.sh@366 -- # decimal 2
00:05:05.639     07:49:21 event -- scripts/common.sh@353 -- # local d=2
00:05:05.639     07:49:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:05.639     07:49:21 event -- scripts/common.sh@355 -- # echo 2
00:05:05.639    07:49:21 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:05.639    07:49:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:05.639    07:49:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:05.639    07:49:21 event -- scripts/common.sh@368 -- # return 0
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:05.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.639  		--rc genhtml_branch_coverage=1
00:05:05.639  		--rc genhtml_function_coverage=1
00:05:05.639  		--rc genhtml_legend=1
00:05:05.639  		--rc geninfo_all_blocks=1
00:05:05.639  		--rc geninfo_unexecuted_blocks=1
00:05:05.639  		
00:05:05.639  		'
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:05.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.639  		--rc genhtml_branch_coverage=1
00:05:05.639  		--rc genhtml_function_coverage=1
00:05:05.639  		--rc genhtml_legend=1
00:05:05.639  		--rc geninfo_all_blocks=1
00:05:05.639  		--rc geninfo_unexecuted_blocks=1
00:05:05.639  		
00:05:05.639  		'
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:05.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.639  		--rc genhtml_branch_coverage=1
00:05:05.639  		--rc genhtml_function_coverage=1
00:05:05.639  		--rc genhtml_legend=1
00:05:05.639  		--rc geninfo_all_blocks=1
00:05:05.639  		--rc geninfo_unexecuted_blocks=1
00:05:05.639  		
00:05:05.639  		'
00:05:05.639    07:49:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:05.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.639  		--rc genhtml_branch_coverage=1
00:05:05.639  		--rc genhtml_function_coverage=1
00:05:05.639  		--rc genhtml_legend=1
00:05:05.639  		--rc geninfo_all_blocks=1
00:05:05.639  		--rc geninfo_unexecuted_blocks=1
00:05:05.639  		
00:05:05.639  		'
00:05:05.639   07:49:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:05.639    07:49:21 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:05.639   07:49:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:05.639   07:49:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:05.639   07:49:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.639   07:49:21 event -- common/autotest_common.sh@10 -- # set +x
00:05:05.639  ************************************
00:05:05.639  START TEST event_perf
00:05:05.639  ************************************
00:05:05.639   07:49:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:05.639  Running I/O for 1 seconds...[2024-11-20 07:49:21.647499] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:05.639  [2024-11-20 07:49:21.647600] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57521 ]
00:05:05.896  [2024-11-20 07:49:21.781474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:05.896  [2024-11-20 07:49:21.859008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:05.896  [2024-11-20 07:49:21.859151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:05.896  [2024-11-20 07:49:21.859296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.896  Running I/O for 1 seconds...[2024-11-20 07:49:21.859296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:07.271  
00:05:07.271  lcore  0:   300227
00:05:07.271  lcore  1:   300227
00:05:07.271  lcore  2:   300229
00:05:07.271  lcore  3:   300229
00:05:07.271  done.
00:05:07.271  
00:05:07.271  real	0m1.284s
00:05:07.271  user	0m4.095s
00:05:07.271  sys	0m0.061s
00:05:07.271   07:49:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.271   07:49:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:07.271  ************************************
00:05:07.271  END TEST event_perf
00:05:07.271  ************************************
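  [annotation] The event_perf run above starts four reactors (-m 0xF, cores 0-3) and reports how many events each lcore processed in the one-second window. The path it exercises is SPDK's event passing: allocate an event targeted at an lcore, then call it. A minimal sketch of that API is below, assuming the same reactor mask as the run above; spdk_event_allocate()/spdk_event_call() and the app framework calls are real SPDK APIs, but the program itself is illustrative and is not the test's source.

      /* event_sketch.c - minimal sketch: send one SPDK event to another core. */
      #include "spdk/stdinc.h"
      #include "spdk/env.h"
      #include "spdk/event.h"

      static void
      hello_fn(void *arg1, void *arg2)
      {
          printf("event ran on core %u\n", spdk_env_get_current_core());
          spdk_app_stop(0);
      }

      static void
      app_start(void *ctx)
      {
          /* Allocate an event targeted at the first core in the mask and fire it. */
          uint32_t target = spdk_env_get_first_core();
          struct spdk_event *ev = spdk_event_allocate(target, hello_fn, NULL, NULL);
          spdk_event_call(ev);
      }

      int main(int argc, char **argv)
      {
          struct spdk_app_opts opts;
          spdk_app_opts_init(&opts, sizeof(opts));
          opts.name = "event_sketch";
          opts.reactor_mask = "0xF";   /* same -m 0xF mask as the test run above */

          int rc = spdk_app_start(&opts, app_start, NULL);
          spdk_app_fini();
          return rc;
      }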
00:05:07.271   07:49:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:07.271   07:49:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:07.271   07:49:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.271   07:49:22 event -- common/autotest_common.sh@10 -- # set +x
00:05:07.271  ************************************
00:05:07.271  START TEST event_reactor
00:05:07.271  ************************************
00:05:07.271   07:49:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:07.271  [2024-11-20 07:49:22.983648] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:07.271  [2024-11-20 07:49:22.983744] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57554 ]
00:05:07.271  [2024-11-20 07:49:23.118194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.271  [2024-11-20 07:49:23.181639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.221  test_start
00:05:08.222  oneshot
00:05:08.222  tick 100
00:05:08.222  tick 100
00:05:08.222  tick 250
00:05:08.222  tick 100
00:05:08.222  tick 100
00:05:08.222  tick 250
00:05:08.222  tick 100
00:05:08.222  tick 500
00:05:08.222  tick 100
00:05:08.222  tick 100
00:05:08.222  tick 250
00:05:08.222  tick 100
00:05:08.222  tick 100
00:05:08.222  test_end
00:05:08.222  
00:05:08.222  real	0m1.281s
00:05:08.222  user	0m1.129s
00:05:08.222  sys	0m0.046s
00:05:08.222   07:49:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.222  ************************************
00:05:08.222  END TEST event_reactor
00:05:08.222  ************************************
00:05:08.222   07:49:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
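  [annotation] The reactor test above runs a single reactor (-c 0x1); the tick 100 / 250 / 500 lines look like periodic callbacks firing at different intervals before test_end. The SPDK primitive behind that pattern is the timed poller. A minimal sketch with the same single-core mask is below; spdk_poller_register()/spdk_poller_unregister() are real SPDK calls, while the tick printing and shutdown logic are illustrative rather than the test's actual source.

      /* poller_sketch.c - minimal sketch: a periodic poller on one reactor. */
      #include "spdk/stdinc.h"
      #include "spdk/event.h"
      #include "spdk/thread.h"

      static struct spdk_poller *g_tick_poller;
      static int g_ticks;

      static int
      tick_fn(void *arg)
      {
          printf("tick 100\n");              /* mirrors the test's tick labels */
          if (++g_ticks == 10) {
              spdk_poller_unregister(&g_tick_poller);
              spdk_app_stop(0);
          }
          return SPDK_POLLER_BUSY;
      }

      static void
      app_start(void *ctx)
      {
          /* Fire tick_fn every 100 microseconds on the current reactor. */
          g_tick_poller = spdk_poller_register(tick_fn, NULL, 100);
      }

      int main(int argc, char **argv)
      {
          struct spdk_app_opts opts;
          spdk_app_opts_init(&opts, sizeof(opts));
          opts.name = "reactor_sketch";
          opts.reactor_mask = "0x1";   /* single reactor, like the -c 0x1 run above */

          int rc = spdk_app_start(&opts, app_start, NULL);
          spdk_app_fini();
          return rc;
      }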
00:05:08.481   07:49:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:08.481   07:49:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:08.481   07:49:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.481   07:49:24 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.481  ************************************
00:05:08.481  START TEST event_reactor_perf
00:05:08.481  ************************************
00:05:08.481   07:49:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:08.481  [2024-11-20 07:49:24.318283] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:08.481  [2024-11-20 07:49:24.318402] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57590 ]
00:05:08.481  [2024-11-20 07:49:24.455018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.740  [2024-11-20 07:49:24.542735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.673  test_start
00:05:09.673  test_end
00:05:09.673  Performance:   830639 events per second
00:05:09.673  
00:05:09.673  real	0m1.319s
00:05:09.673  user	0m1.152s
00:05:09.673  sys	0m0.058s
00:05:09.673  ************************************
00:05:09.673  END TEST event_reactor_perf
00:05:09.673  ************************************
00:05:09.673   07:49:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.673   07:49:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:09.673    07:49:25 event -- event/event.sh@49 -- # uname -s
00:05:09.673   07:49:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:09.673   07:49:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:09.673   07:49:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.673   07:49:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.673   07:49:25 event -- common/autotest_common.sh@10 -- # set +x
00:05:09.673  ************************************
00:05:09.673  START TEST event_scheduler
00:05:09.673  ************************************
00:05:09.673   07:49:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:09.930  * Looking for test storage...
00:05:09.930  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:09.930     07:49:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:05:09.930     07:49:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:09.930     07:49:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:09.930    07:49:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:09.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.930  		--rc genhtml_branch_coverage=1
00:05:09.930  		--rc genhtml_function_coverage=1
00:05:09.930  		--rc genhtml_legend=1
00:05:09.930  		--rc geninfo_all_blocks=1
00:05:09.930  		--rc geninfo_unexecuted_blocks=1
00:05:09.930  		
00:05:09.930  		'
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:09.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.930  		--rc genhtml_branch_coverage=1
00:05:09.930  		--rc genhtml_function_coverage=1
00:05:09.930  		--rc genhtml_legend=1
00:05:09.930  		--rc geninfo_all_blocks=1
00:05:09.930  		--rc geninfo_unexecuted_blocks=1
00:05:09.930  		
00:05:09.930  		'
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:09.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.930  		--rc genhtml_branch_coverage=1
00:05:09.930  		--rc genhtml_function_coverage=1
00:05:09.930  		--rc genhtml_legend=1
00:05:09.930  		--rc geninfo_all_blocks=1
00:05:09.930  		--rc geninfo_unexecuted_blocks=1
00:05:09.930  		
00:05:09.930  		'
00:05:09.930    07:49:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:09.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.930  		--rc genhtml_branch_coverage=1
00:05:09.930  		--rc genhtml_function_coverage=1
00:05:09.930  		--rc genhtml_legend=1
00:05:09.930  		--rc geninfo_all_blocks=1
00:05:09.930  		--rc geninfo_unexecuted_blocks=1
00:05:09.930  		
00:05:09.930  		'
00:05:09.930   07:49:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:09.930   07:49:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57659
00:05:09.930   07:49:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:09.930   07:49:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57659
00:05:09.930   07:49:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:09.930   07:49:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57659 ']'
00:05:09.930   07:49:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:09.930   07:49:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:09.930   07:49:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:09.930  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:09.930   07:49:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:09.930   07:49:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.930  [2024-11-20 07:49:25.916664] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:09.930  [2024-11-20 07:49:25.917002] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57659 ]
00:05:10.189  [2024-11-20 07:49:26.052061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:10.189  [2024-11-20 07:49:26.128359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.189  [2024-11-20 07:49:26.128500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:10.189  [2024-11-20 07:49:26.128588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:10.189  [2024-11-20 07:49:26.128591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:10.189   07:49:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:10.189  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:10.189  POWER: Cannot set governor of lcore 0 to userspace
00:05:10.189  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:10.189  POWER: Cannot set governor of lcore 0 to performance
00:05:10.189  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:10.189  POWER: Cannot set governor of lcore 0 to userspace
00:05:10.189  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:10.189  POWER: Cannot set governor of lcore 0 to userspace
00:05:10.189  GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:10.189  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:10.189  POWER: Unable to set Power Management Environment for lcore 0
00:05:10.189  [2024-11-20 07:49:26.178794] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:05:10.189  [2024-11-20 07:49:26.178811] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:05:10.189  [2024-11-20 07:49:26.178822] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:10.189  [2024-11-20 07:49:26.178837] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:10.189  [2024-11-20 07:49:26.178847] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:10.189  [2024-11-20 07:49:26.178856] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.189   07:49:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.189   07:49:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:10.447  [2024-11-20 07:49:26.282633] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:10.447   07:49:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.447   07:49:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:10.447   07:49:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:10.447   07:49:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:10.447   07:49:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:10.447  ************************************
00:05:10.447  START TEST scheduler_create_thread
00:05:10.447  ************************************
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.447  2
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.447  3
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.447  4
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.447  5
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.447   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448  6
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448  7
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448  8
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448  9
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448  10
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.448   07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.448    07:49:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.822    07:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:11.822   07:49:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:11.822   07:49:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:11.822   07:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.822   07:49:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:13.195  ************************************
00:05:13.195  END TEST scheduler_create_thread
00:05:13.195  ************************************
00:05:13.195   07:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:13.195  
00:05:13.195  real	0m2.616s
00:05:13.195  user	0m0.021s
00:05:13.195  sys	0m0.006s
00:05:13.195   07:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.195   07:49:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:13.195   07:49:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:13.196   07:49:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57659
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57659 ']'
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57659
00:05:13.196    07:49:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:13.196    07:49:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57659
00:05:13.196  killing process with pid 57659
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57659'
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57659
00:05:13.196   07:49:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57659
00:05:13.454  [2024-11-20 07:49:29.391629] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:13.712  
00:05:13.712  real	0m4.008s
00:05:13.712  user	0m5.885s
00:05:13.712  sys	0m0.342s
00:05:13.712   07:49:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.712   07:49:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:13.712  ************************************
00:05:13.712  END TEST event_scheduler
00:05:13.712  ************************************
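The scheduler_create_thread test above drives the dynamic scheduler entirely through plugin RPCs; rpc_cmd in the trace is a thin wrapper around rpc.py. A condensed sketch of the call sequence, assuming the scheduler test app is running on its default RPC socket and the test's scheduler_plugin is importable:

  # spawn idle threads pinned to single cores (-m cpumask) with 0% active time (-a)
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
  # spawn unpinned threads with a fixed busy percentage
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  # each create call prints the new thread id; adjust or remove threads by that id
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12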
00:05:13.712   07:49:29 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:13.712   07:49:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:13.712   07:49:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.712   07:49:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.712   07:49:29 event -- common/autotest_common.sh@10 -- # set +x
00:05:13.712  ************************************
00:05:13.712  START TEST app_repeat
00:05:13.712  ************************************
00:05:13.712   07:49:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:13.712   07:49:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:13.972  Process app_repeat pid: 57746
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57746
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57746'
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:13.972  spdk_app_start Round 0
00:05:13.972  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:13.972   07:49:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57746 /var/tmp/spdk-nbd.sock
00:05:13.972   07:49:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57746 ']'
00:05:13.972   07:49:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:13.972   07:49:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:13.972   07:49:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:13.972   07:49:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:13.972   07:49:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:13.972  [2024-11-20 07:49:29.771458] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:13.972  [2024-11-20 07:49:29.771766] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57746 ]
00:05:13.972  [2024-11-20 07:49:29.902112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:13.972  [2024-11-20 07:49:29.974670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.972  [2024-11-20 07:49:29.974681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
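Round 0 above launches the app_repeat helper on a two-core mask and blocks until its RPC socket is up. In outline (killprocess and waitforlisten are the autotest_common.sh helpers visible in the trace; backgrounding with & is an assumption, the trace only records the command and the resulting pid 57746):

  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock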
00:05:14.230   07:49:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:14.230   07:49:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:14.230   07:49:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:14.490  Malloc0
00:05:14.490   07:49:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:14.748  Malloc1
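Each round then creates two malloc bdevs over the nbd socket; bdev_malloc_create takes a size in MB and a block size, and prints the generated bdev name (Malloc0, then Malloc1). Here rpc.py abbreviates the full scripts/rpc.py path used in the run:

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1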
00:05:14.748   07:49:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.748   07:49:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:15.007  /dev/nbd0
00:05:15.007    07:49:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:15.007   07:49:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:15.007   07:49:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:15.265  1+0 records in
00:05:15.265  1+0 records out
00:05:15.265  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319496 s, 12.8 MB/s
00:05:15.265    07:49:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:15.265   07:49:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:15.265   07:49:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:15.265   07:49:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:15.265   07:49:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:15.265   07:49:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:15.265   07:49:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:15.265   07:49:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:15.533  /dev/nbd1
00:05:15.533    07:49:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:15.533   07:49:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:15.533  1+0 records in
00:05:15.533  1+0 records out
00:05:15.533  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349642 s, 11.7 MB/s
00:05:15.533    07:49:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:15.533   07:49:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:15.533   07:49:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:15.533   07:49:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
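nbd_start_disks exports each bdev as a kernel NBD node, and waitfornbd confirms the node is usable before moving on: it polls /proc/partitions (up to 20 attempts) and then proves the device answers a single direct 4 KiB read. Roughly, per device:

  tmpfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=$tmpfile bs=4096 count=1 iflag=direct
  size=$(stat -c %s $tmpfile); rm -f $tmpfile
  [ "$size" != 0 ]    # a non-empty read means the device answered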
00:05:15.533    07:49:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:15.533    07:49:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.534     07:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:15.793    07:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:15.793    {
00:05:15.793      "nbd_device": "/dev/nbd0",
00:05:15.793      "bdev_name": "Malloc0"
00:05:15.793    },
00:05:15.793    {
00:05:15.793      "nbd_device": "/dev/nbd1",
00:05:15.793      "bdev_name": "Malloc1"
00:05:15.793    }
00:05:15.793  ]'
00:05:15.793     07:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:15.793    {
00:05:15.793      "nbd_device": "/dev/nbd0",
00:05:15.793      "bdev_name": "Malloc0"
00:05:15.793    },
00:05:15.793    {
00:05:15.793      "nbd_device": "/dev/nbd1",
00:05:15.793      "bdev_name": "Malloc1"
00:05:15.793    }
00:05:15.793  ]'
00:05:15.793     07:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:15.793    07:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:15.793  /dev/nbd1'
00:05:15.793     07:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:15.793  /dev/nbd1'
00:05:15.793     07:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:15.793    07:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:15.793    07:49:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
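The 2-vs-2 check above counts the devices the target reports as attached; nbd_get_disks returns a JSON array that is reduced to device paths and tallied, roughly:

  rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # expect 2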
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:15.793  256+0 records in
00:05:15.793  256+0 records out
00:05:15.793  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00850988 s, 123 MB/s
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:15.793  256+0 records in
00:05:15.793  256+0 records out
00:05:15.793  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206117 s, 50.9 MB/s
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:15.793  256+0 records in
00:05:15.793  256+0 records out
00:05:15.793  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236994 s, 44.2 MB/s
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:15.793   07:49:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
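The write/verify pass above pushes the same 1 MiB of random data through both devices and then compares each device against the source file; any mismatch makes cmp exit non-zero and fail the test. In outline (nbdrandtest abbreviates the test-directory scratch file in the trace):

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB source file
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct    # write pass
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest $nbd                               # verify pass
  done
  rm nbdrandtest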
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:15.794   07:49:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:16.053    07:49:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:16.053   07:49:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:16.620    07:49:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:16.621   07:49:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:16.621    07:49:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:16.621    07:49:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:16.621     07:49:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:16.879    07:49:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:16.879     07:49:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:16.879     07:49:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:16.879    07:49:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:16.879     07:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:16.879     07:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:16.879     07:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:16.879    07:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:16.879    07:49:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:16.879   07:49:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:16.879   07:49:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:16.879   07:49:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:16.879   07:49:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:17.139   07:49:33 event.app_repeat -- event/event.sh@35 -- # sleep 3
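Round teardown detaches both NBD devices, re-checks that the target now reports zero disks, and then tells the app to exit so the next round can start it fresh:

  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # 0 expected
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3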
00:05:17.398  [2024-11-20 07:49:33.296582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:17.398  [2024-11-20 07:49:33.358410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.398  [2024-11-20 07:49:33.358421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.398  [2024-11-20 07:49:33.411146] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:17.398  [2024-11-20 07:49:33.411213] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:20.687   07:49:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:20.687  spdk_app_start Round 1
00:05:20.687   07:49:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:20.687   07:49:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57746 /var/tmp/spdk-nbd.sock
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57746 ']'
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.687  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.687   07:49:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:20.687   07:49:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:20.946  Malloc0
00:05:20.946   07:49:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:21.205  Malloc1
00:05:21.205   07:49:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.205   07:49:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:21.464  /dev/nbd0
00:05:21.464    07:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:21.464   07:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:21.464  1+0 records in
00:05:21.464  1+0 records out
00:05:21.464  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333409 s, 12.3 MB/s
00:05:21.464    07:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:21.464   07:49:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:21.464   07:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:21.464   07:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.464   07:49:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:21.723  /dev/nbd1
00:05:21.981    07:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:21.981   07:49:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:21.981  1+0 records in
00:05:21.981  1+0 records out
00:05:21.981  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306951 s, 13.3 MB/s
00:05:21.981    07:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:21.981   07:49:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:21.981   07:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:21.981   07:49:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:21.981    07:49:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:21.981    07:49:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:21.981     07:49:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:22.239    07:49:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:22.239    {
00:05:22.239      "nbd_device": "/dev/nbd0",
00:05:22.239      "bdev_name": "Malloc0"
00:05:22.239    },
00:05:22.239    {
00:05:22.239      "nbd_device": "/dev/nbd1",
00:05:22.239      "bdev_name": "Malloc1"
00:05:22.239    }
00:05:22.239  ]'
00:05:22.239     07:49:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:22.239    {
00:05:22.239      "nbd_device": "/dev/nbd0",
00:05:22.239      "bdev_name": "Malloc0"
00:05:22.239    },
00:05:22.239    {
00:05:22.239      "nbd_device": "/dev/nbd1",
00:05:22.239      "bdev_name": "Malloc1"
00:05:22.239    }
00:05:22.239  ]'
00:05:22.239     07:49:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:22.239    07:49:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:22.239  /dev/nbd1'
00:05:22.239     07:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:22.239  /dev/nbd1'
00:05:22.239     07:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:22.239    07:49:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:22.239    07:49:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:22.239  256+0 records in
00:05:22.239  256+0 records out
00:05:22.239  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107928 s, 97.2 MB/s
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:22.239   07:49:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:22.239  256+0 records in
00:05:22.239  256+0 records out
00:05:22.239  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267583 s, 39.2 MB/s
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:22.240  256+0 records in
00:05:22.240  256+0 records out
00:05:22.240  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234328 s, 44.7 MB/s
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:22.240   07:49:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:22.498   07:49:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:22.756    07:49:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:22.756   07:49:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:23.014    07:49:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:23.014   07:49:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:23.014   07:49:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:23.015   07:49:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:23.015   07:49:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:23.015   07:49:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:23.015   07:49:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:23.015   07:49:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:23.015    07:49:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:23.015    07:49:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:23.015     07:49:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:23.273    07:49:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:23.273     07:49:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:23.273     07:49:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:23.273    07:49:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:23.273     07:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:23.273     07:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:23.273     07:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:23.273    07:49:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:23.273    07:49:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:23.273   07:49:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:23.273   07:49:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:23.273   07:49:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:23.273   07:49:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:23.840   07:49:39 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:23.840  [2024-11-20 07:49:39.765951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:23.840  [2024-11-20 07:49:39.830944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.840  [2024-11-20 07:49:39.830954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.098  [2024-11-20 07:49:39.885876] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:24.098  [2024-11-20 07:49:39.885951] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:26.628   07:49:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:26.628  spdk_app_start Round 2
00:05:26.628   07:49:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:26.628   07:49:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57746 /var/tmp/spdk-nbd.sock
00:05:26.628   07:49:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57746 ']'
00:05:26.628   07:49:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:26.628   07:49:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:26.628  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:26.628   07:49:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:26.628   07:49:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:26.628   07:49:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:26.887   07:49:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:26.887   07:49:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:26.887   07:49:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:27.146  Malloc0
00:05:27.405   07:49:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:27.664  Malloc1
00:05:27.664   07:49:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:27.664   07:49:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:27.923  /dev/nbd0
00:05:27.923    07:49:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:27.923   07:49:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:27.923  1+0 records in
00:05:27.923  1+0 records out
00:05:27.923  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324624 s, 12.6 MB/s
00:05:27.923    07:49:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:27.923   07:49:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:27.923   07:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:27.923   07:49:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:27.923   07:49:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:28.182  /dev/nbd1
00:05:28.182    07:49:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:28.182   07:49:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:28.182   07:49:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:28.182  1+0 records in
00:05:28.182  1+0 records out
00:05:28.183  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313736 s, 13.1 MB/s
00:05:28.183    07:49:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:28.183   07:49:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:28.183   07:49:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:28.183   07:49:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:28.183   07:49:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:28.183   07:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:28.183   07:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:28.183    07:49:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:28.183    07:49:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:28.183     07:49:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:28.750    07:49:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:28.750    {
00:05:28.750      "nbd_device": "/dev/nbd0",
00:05:28.750      "bdev_name": "Malloc0"
00:05:28.750    },
00:05:28.750    {
00:05:28.750      "nbd_device": "/dev/nbd1",
00:05:28.750      "bdev_name": "Malloc1"
00:05:28.750    }
00:05:28.750  ]'
00:05:28.750     07:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:28.750    {
00:05:28.750      "nbd_device": "/dev/nbd0",
00:05:28.750      "bdev_name": "Malloc0"
00:05:28.750    },
00:05:28.750    {
00:05:28.750      "nbd_device": "/dev/nbd1",
00:05:28.750      "bdev_name": "Malloc1"
00:05:28.750    }
00:05:28.750  ]'
00:05:28.750     07:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:28.750    07:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:28.750  /dev/nbd1'
00:05:28.750     07:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:28.750  /dev/nbd1'
00:05:28.750     07:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:28.750    07:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:28.750    07:49:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:28.750  256+0 records in
00:05:28.750  256+0 records out
00:05:28.750  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766589 s, 137 MB/s
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:28.750  256+0 records in
00:05:28.750  256+0 records out
00:05:28.750  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182772 s, 57.4 MB/s
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:28.750   07:49:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:28.750  256+0 records in
00:05:28.750  256+0 records out
00:05:28.750  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219226 s, 47.8 MB/s
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:28.751   07:49:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:29.009    07:49:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:29.009   07:49:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:29.268    07:49:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:29.268   07:49:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:29.268    07:49:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:29.268    07:49:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:29.268     07:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:29.526    07:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:29.526     07:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:29.526     07:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:29.784    07:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:29.784     07:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:29.784     07:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:29.784     07:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:29.784    07:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:29.784    07:49:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:29.784   07:49:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:29.784   07:49:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:29.784   07:49:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:29.784   07:49:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:30.042   07:49:45 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:30.300  [2024-11-20 07:49:46.134288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:30.300  [2024-11-20 07:49:46.198140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:30.300  [2024-11-20 07:49:46.198149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.300  [2024-11-20 07:49:46.251260] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:30.300  [2024-11-20 07:49:46.251332] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:33.584  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:33.584   07:49:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57746 /var/tmp/spdk-nbd.sock
00:05:33.584   07:49:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57746 ']'
00:05:33.584   07:49:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:33.584   07:49:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.584   07:49:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:33.584   07:49:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:33.584   07:49:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:33.584   07:49:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:33.584   07:49:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:33.584   07:49:49 event.app_repeat -- event/event.sh@39 -- # killprocess 57746
00:05:33.584   07:49:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57746 ']'
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57746
00:05:33.585    07:49:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:33.585    07:49:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57746
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:33.585  killing process with pid 57746
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57746'
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57746
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57746
00:05:33.585  spdk_app_start is called in Round 0.
00:05:33.585  Shutdown signal received, stop current app iteration
00:05:33.585  Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 reinitialization...
00:05:33.585  spdk_app_start is called in Round 1.
00:05:33.585  Shutdown signal received, stop current app iteration
00:05:33.585  Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 reinitialization...
00:05:33.585  spdk_app_start is called in Round 2.
00:05:33.585  Shutdown signal received, stop current app iteration
00:05:33.585  Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 reinitialization...
00:05:33.585  spdk_app_start is called in Round 3.
00:05:33.585  Shutdown signal received, stop current app iteration
00:05:33.585   07:49:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:33.585   07:49:49 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:33.585  
00:05:33.585  real	0m19.874s
00:05:33.585  user	0m45.244s
00:05:33.585  sys	0m3.353s
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.585   07:49:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:33.585  ************************************
00:05:33.585  END TEST app_repeat
00:05:33.585  ************************************
00:05:33.844   07:49:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:33.844   07:49:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:33.844   07:49:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.844   07:49:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.844   07:49:49 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.844  ************************************
00:05:33.844  START TEST cpu_locks
00:05:33.844  ************************************
00:05:33.844   07:49:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:33.844  * Looking for test storage...
00:05:33.844  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:33.844     07:49:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:05:33.844     07:49:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:33.844     07:49:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:33.844    07:49:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:33.844  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.844  		--rc genhtml_branch_coverage=1
00:05:33.844  		--rc genhtml_function_coverage=1
00:05:33.844  		--rc genhtml_legend=1
00:05:33.844  		--rc geninfo_all_blocks=1
00:05:33.844  		--rc geninfo_unexecuted_blocks=1
00:05:33.844  		
00:05:33.844  		'
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:33.844  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.844  		--rc genhtml_branch_coverage=1
00:05:33.844  		--rc genhtml_function_coverage=1
00:05:33.844  		--rc genhtml_legend=1
00:05:33.844  		--rc geninfo_all_blocks=1
00:05:33.844  		--rc geninfo_unexecuted_blocks=1
00:05:33.844  		
00:05:33.844  		'
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:33.844  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.844  		--rc genhtml_branch_coverage=1
00:05:33.844  		--rc genhtml_function_coverage=1
00:05:33.844  		--rc genhtml_legend=1
00:05:33.844  		--rc geninfo_all_blocks=1
00:05:33.844  		--rc geninfo_unexecuted_blocks=1
00:05:33.844  		
00:05:33.844  		'
00:05:33.844    07:49:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:33.844  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.844  		--rc genhtml_branch_coverage=1
00:05:33.844  		--rc genhtml_function_coverage=1
00:05:33.844  		--rc genhtml_legend=1
00:05:33.844  		--rc geninfo_all_blocks=1
00:05:33.844  		--rc geninfo_unexecuted_blocks=1
00:05:33.844  		
00:05:33.844  		'
00:05:33.844   07:49:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:33.844   07:49:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:33.844   07:49:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:33.844   07:49:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:33.844   07:49:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.844   07:49:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.844   07:49:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:33.844  ************************************
00:05:33.844  START TEST default_locks
00:05:33.844  ************************************
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58202
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58202
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58202 ']'
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.844  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:33.844   07:49:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.103  [2024-11-20 07:49:49.924205] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:34.103  [2024-11-20 07:49:49.924336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ]
00:05:34.103  [2024-11-20 07:49:50.055907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.103  [2024-11-20 07:49:50.140589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.690   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.690   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:05:34.690   07:49:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58202
00:05:34.690   07:49:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58202
00:05:34.690   07:49:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
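locks_exist (cpu_locks.sh@22 above) asserts that the target with the given pid holds an SPDK CPU-core lock by listing the file locks owned by that pid. A one-line sketch reconstructed from the trace (any error handling is not visible in the log):

  # locks_exist <pid> -- succeed if the process holds a file lock on a spdk_cpu_lock file
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

With spdk_tgt started on -m 0x1, the lock being checked is expected to be /var/tmp/spdk_cpu_lock_000; the per-core naming shows up later in check_remaining_locks.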
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58202
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58202 ']'
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58202
00:05:34.948    07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:34.948    07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58202
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:34.948  killing process with pid 58202
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58202'
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58202
00:05:34.948   07:49:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58202
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58202
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58202
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:35.513    07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58202
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58202 ']'
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.513   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:35.514  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:35.514  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58202) - No such process
00:05:35.514  ERROR: process (pid: 58202) is no longer running
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
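After killing pid 58202 and using NOT waitforlisten to confirm the process is really gone, the test calls no_locks (cpu_locks.sh@26-27 above) to verify that no stale lock files survived the shutdown. A sketch of what the trace implies, assuming nullglob is in effect so an unmatched glob expands to an empty array, and with an assumed error message:

  # no_locks -- fail if any /var/tmp/spdk_cpu_lock_* file is left behind
  no_locks() {
      local lock_files=(/var/tmp/spdk_cpu_lock_*)
      if ((${#lock_files[@]} != 0)); then
          echo "Stale CPU lock files remain: ${lock_files[*]}" >&2
          return 1
      fi
  }

In this run the array is empty, the arithmetic test at @27 evaluates (( 0 != 0 )), and default_locks passes.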
00:05:35.514  
00:05:35.514  real	0m1.665s
00:05:35.514  user	0m1.524s
00:05:35.514  sys	0m0.631s
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.514   07:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:35.514  ************************************
00:05:35.514  END TEST default_locks
00:05:35.514  ************************************
00:05:35.773   07:49:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:35.773   07:49:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.773   07:49:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.773   07:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:35.773  ************************************
00:05:35.773  START TEST default_locks_via_rpc
00:05:35.773  ************************************
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58241
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58241
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58241 ']'
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:35.773  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:35.773   07:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:35.773  [2024-11-20 07:49:51.641698] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:35.773  [2024-11-20 07:49:51.641838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ]
00:05:35.773  [2024-11-20 07:49:51.773840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.032  [2024-11-20 07:49:51.852933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
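default_locks_via_rpc exercises the same lock-file lifecycle through the RPC interface: framework_disable_cpumask_locks removes the per-core lock (so no_locks passes on the lines above), and framework_enable_cpumask_locks recreates it (so the locks_exist call on the next lines succeeds). A hedged manual equivalent, assuming the target listens on the default /var/tmp/spdk.sock and was started with -m 0x1:

  # Toggle CPU core-mask locking on a running spdk_tgt via rpc.py
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # expected: no output
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_*                 # expected: /var/tmp/spdk_cpu_lock_000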
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58241
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58241
00:05:36.291   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:36.550   07:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58241
00:05:36.550   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58241 ']'
00:05:36.550   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58241
00:05:36.550    07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:36.550   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:36.550    07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58241
00:05:36.810   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:36.810  killing process with pid 58241
00:05:36.810   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:36.810   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58241'
00:05:36.810   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58241
00:05:36.810   07:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58241
00:05:37.377  
00:05:37.377  real	0m1.519s
00:05:37.377  user	0m1.382s
00:05:37.377  sys	0m0.604s
00:05:37.377  ************************************
00:05:37.377  END TEST default_locks_via_rpc
00:05:37.377  ************************************
00:05:37.377   07:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.377   07:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:37.377   07:49:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:37.377   07:49:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.377   07:49:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.377   07:49:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:37.377  ************************************
00:05:37.377  START TEST non_locking_app_on_locked_coremask
00:05:37.377  ************************************
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58290
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58290 /var/tmp/spdk.sock
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58290 ']'
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.377  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.377   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:37.377  [2024-11-20 07:49:53.217838] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:37.378  [2024-11-20 07:49:53.217954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ]
00:05:37.378  [2024-11-20 07:49:53.349983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.636  [2024-11-20 07:49:53.417370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58293
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58293 /var/tmp/spdk2.sock
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58293 ']'
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:37.896  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.896   07:49:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:37.896  [2024-11-20 07:49:53.754106] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:37.896  [2024-11-20 07:49:53.754212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58293 ]
00:05:37.896  [2024-11-20 07:49:53.889436] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:37.896  [2024-11-20 07:49:53.889489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.154  [2024-11-20 07:49:54.020047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.090   07:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.090   07:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:39.090   07:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58290
00:05:39.090   07:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58290
00:05:39.090   07:49:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58290
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58290 ']'
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58290
00:05:40.026    07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.026    07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58290
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:40.026  killing process with pid 58290
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58290'
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58290
00:05:40.026   07:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58290
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58293
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58293 ']'
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58293
00:05:40.594    07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.594    07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58293
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:40.594  killing process with pid 58293
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58293'
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58293
00:05:40.594   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58293
00:05:41.162  
00:05:41.162  real	0m3.754s
00:05:41.162  user	0m4.120s
00:05:41.162  sys	0m1.149s
00:05:41.162  ************************************
00:05:41.162  END TEST non_locking_app_on_locked_coremask
00:05:41.162  ************************************
00:05:41.162   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.162   07:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.162   07:49:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:41.162   07:49:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.162   07:49:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.162   07:49:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:41.162  ************************************
00:05:41.162  START TEST locking_app_on_unlocked_coremask
00:05:41.162  ************************************
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58370
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58370 /var/tmp/spdk.sock
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58370 ']'
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:41.162  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.162   07:49:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.162  [2024-11-20 07:49:57.025919] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:41.162  [2024-11-20 07:49:57.026028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ]
00:05:41.162  [2024-11-20 07:49:57.157830] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:41.162  [2024-11-20 07:49:57.157908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.420  [2024-11-20 07:49:57.229746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58374
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58374 /var/tmp/spdk2.sock
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58374 ']'
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:41.680  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.680   07:49:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.680  [2024-11-20 07:49:57.569185] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:41.680  [2024-11-20 07:49:57.569326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58374 ]
00:05:41.680  [2024-11-20 07:49:57.704096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.972  [2024-11-20 07:49:57.832855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.543   07:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.543   07:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:42.543   07:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58374
00:05:42.543   07:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58374
00:05:42.543   07:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58370
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58370 ']'
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58370
00:05:43.477    07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:43.477    07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58370
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:43.477  killing process with pid 58370
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58370'
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58370
00:05:43.477   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58370
00:05:44.043   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58374
00:05:44.043   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58374 ']'
00:05:44.043   07:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58374
00:05:44.043    07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:44.043   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:44.044    07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58374
00:05:44.044   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:44.044  killing process with pid 58374
00:05:44.044   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:44.044   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58374'
00:05:44.044   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58374
00:05:44.044   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58374
00:05:44.611  
00:05:44.611  real	0m3.439s
00:05:44.611  user	0m3.566s
00:05:44.611  sys	0m1.061s
00:05:44.611   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:44.612  ************************************
00:05:44.612  END TEST locking_app_on_unlocked_coremask
00:05:44.612  ************************************
00:05:44.612   07:50:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:44.612   07:50:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:44.612   07:50:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:44.612   07:50:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:44.612  ************************************
00:05:44.612  START TEST locking_app_on_locked_coremask
00:05:44.612  ************************************
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58439
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58439 /var/tmp/spdk.sock
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58439 ']'
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:44.612  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:44.612   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:44.612  [2024-11-20 07:50:00.516741] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:44.612  [2024-11-20 07:50:00.516851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ]
00:05:44.612  [2024-11-20 07:50:00.644729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.871  [2024-11-20 07:50:00.709457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58448
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58448 /var/tmp/spdk2.sock
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58448 /var/tmp/spdk2.sock
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:45.130    07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58448 /var/tmp/spdk2.sock
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58448 ']'
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:45.130  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:45.130   07:50:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:45.130  [2024-11-20 07:50:01.039775] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:45.130  [2024-11-20 07:50:01.039908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58448 ]
00:05:45.388  [2024-11-20 07:50:01.172073] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58439 has claimed it.
00:05:45.388  [2024-11-20 07:50:01.172156] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:45.955  ERROR: process (pid: 58448) is no longer running
00:05:45.955  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58448) - No such process
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58439
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58439
00:05:45.955   07:50:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:46.213   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58439
00:05:46.213   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58439 ']'
00:05:46.213   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58439
00:05:46.213    07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:46.214   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:46.214    07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58439
00:05:46.472   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:46.472  killing process with pid 58439
00:05:46.472   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:46.472   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58439'
00:05:46.472   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58439
00:05:46.472   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58439
00:05:46.731  
00:05:46.731  real	0m2.191s
00:05:46.731  user	0m2.480s
00:05:46.731  sys	0m0.612s
00:05:46.731   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.731   07:50:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.731  ************************************
00:05:46.731  END TEST locking_app_on_locked_coremask
00:05:46.731  ************************************
00:05:46.731   07:50:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:46.731   07:50:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.731   07:50:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.731   07:50:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:46.731  ************************************
00:05:46.731  START TEST locking_overlapped_coremask
00:05:46.731  ************************************
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58493
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58493 /var/tmp/spdk.sock
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58493 ']'
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.731  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.731   07:50:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.731  [2024-11-20 07:50:02.768209] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:46.731  [2024-11-20 07:50:02.768338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58493 ]
00:05:46.990  [2024-11-20 07:50:02.900240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:46.990  [2024-11-20 07:50:02.974975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:46.990  [2024-11-20 07:50:02.975026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:46.990  [2024-11-20 07:50:02.975032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58509
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58509 /var/tmp/spdk2.sock
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58509 /var/tmp/spdk2.sock
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:47.249    07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:47.249  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58509 /var/tmp/spdk2.sock
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58509 ']'
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:47.249   07:50:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:47.508  [2024-11-20 07:50:03.320032] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:47.508  [2024-11-20 07:50:03.320151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58509 ]
00:05:47.508  [2024-11-20 07:50:03.456922] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58493 has claimed it.
00:05:47.508  [2024-11-20 07:50:03.457005] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:48.076  ERROR: process (pid: 58509) is no longer running
00:05:48.076  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58509) - No such process
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
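For readability, the check_remaining_locks step traced just above amounts to comparing the CPU-lock files the target left under /var/tmp with the set expected for the 0x7 core mask (cores 0-2). A minimal sketch of that comparison, reconstructed from the traced commands rather than copied from cpu_locks.sh:

  # Sketch only: mirrors the comparison traced above, not the exact test source.
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files actually present
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 for mask 0x7
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]               # non-zero status fails the test
  }

The same check is repeated later by locking_overlapped_coremask_via_rpc before its target processes are killed.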
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58493
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58493 ']'
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58493
00:05:48.076    07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:48.076    07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58493
00:05:48.076  killing process with pid 58493
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58493'
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58493
00:05:48.076   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58493
00:05:48.644  
00:05:48.644  real	0m1.724s
00:05:48.644  user	0m4.674s
00:05:48.644  sys	0m0.425s
00:05:48.644  ************************************
00:05:48.644  END TEST locking_overlapped_coremask
00:05:48.644  ************************************
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:48.644   07:50:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:48.644   07:50:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.644   07:50:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.644   07:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:48.644  ************************************
00:05:48.644  START TEST locking_overlapped_coremask_via_rpc
00:05:48.644  ************************************
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58549
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58549 /var/tmp/spdk.sock
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58549 ']'
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:48.644  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.644   07:50:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:48.644  [2024-11-20 07:50:04.543781] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:48.644  [2024-11-20 07:50:04.543890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58549 ]
00:05:48.644  [2024-11-20 07:50:04.673893] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:48.644  [2024-11-20 07:50:04.673950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:48.903  [2024-11-20 07:50:04.742801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:48.903  [2024-11-20 07:50:04.742931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:48.903  [2024-11-20 07:50:04.742936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58560
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58560 /var/tmp/spdk2.sock
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58560 ']'
00:05:49.162   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:49.163   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:49.163  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:49.163   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:49.163   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:49.163   07:50:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.163  [2024-11-20 07:50:05.071373] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:49.163  [2024-11-20 07:50:05.071488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58560 ]
00:05:49.423  [2024-11-20 07:50:05.204935] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:49.423  [2024-11-20 07:50:05.204987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:49.423  [2024-11-20 07:50:05.345638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:49.423  [2024-11-20 07:50:05.345754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:49.423  [2024-11-20 07:50:05.345755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:50.361    07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:50.361  [2024-11-20 07:50:06.202380] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58549 has claimed it.
00:05:50.361  request:
00:05:50.361  {
00:05:50.361  "method": "framework_enable_cpumask_locks",
00:05:50.361  "req_id": 1
00:05:50.361  }
00:05:50.361  Got JSON-RPC error response
00:05:50.361  response:
00:05:50.361  {
00:05:50.361  "code": -32603,
00:05:50.361  "message": "Failed to claim CPU core: 2"
00:05:50.361  }
00:05:50.361  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58549 /var/tmp/spdk.sock
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58549 ']'
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:50.361   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58560 /var/tmp/spdk2.sock
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58560 ']'
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:50.620  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:50.620   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:50.879  
00:05:50.879  real	0m2.365s
00:05:50.879  user	0m1.387s
00:05:50.879  sys	0m0.179s
00:05:50.879  ************************************
00:05:50.879  END TEST locking_overlapped_coremask_via_rpc
00:05:50.879  ************************************
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:50.879   07:50:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:50.879   07:50:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:50.879   07:50:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58549 ]]
00:05:50.879   07:50:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58549
00:05:50.879   07:50:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58549 ']'
00:05:50.879   07:50:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58549
00:05:50.879    07:50:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:50.879   07:50:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:50.879    07:50:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58549
00:05:51.138  killing process with pid 58549
00:05:51.138   07:50:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:51.138   07:50:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:51.138   07:50:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58549'
00:05:51.138   07:50:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58549
00:05:51.138   07:50:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58549
00:05:51.397   07:50:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58560 ]]
00:05:51.397   07:50:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58560
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58560 ']'
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58560
00:05:51.397    07:50:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:51.397    07:50:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58560
00:05:51.397  killing process with pid 58560
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58560'
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58560
00:05:51.397   07:50:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58560
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:51.974  Process with pid 58549 is not found
00:05:51.974  Process with pid 58560 is not found
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58549 ]]
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58549
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58549 ']'
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58549
00:05:51.974  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58549) - No such process
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58549 is not found'
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58560 ]]
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58560
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58560 ']'
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58560
00:05:51.974  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58560) - No such process
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58560 is not found'
00:05:51.974   07:50:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:51.974  
00:05:51.974  real	0m18.099s
00:05:51.974  user	0m31.737s
00:05:51.974  sys	0m5.595s
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:51.974   07:50:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:51.974  ************************************
00:05:51.974  END TEST cpu_locks
00:05:51.974  ************************************
00:05:51.974  
00:05:51.974  real	0m46.387s
00:05:51.974  user	1m29.459s
00:05:51.974  sys	0m9.734s
00:05:51.974   07:50:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:51.974   07:50:07 event -- common/autotest_common.sh@10 -- # set +x
00:05:51.974  ************************************
00:05:51.974  END TEST event
00:05:51.974  ************************************
00:05:51.974   07:50:07  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:51.974   07:50:07  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:51.974   07:50:07  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:51.974   07:50:07  -- common/autotest_common.sh@10 -- # set +x
00:05:51.974  ************************************
00:05:51.974  START TEST thread
00:05:51.974  ************************************
00:05:51.974   07:50:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:51.974  * Looking for test storage...
00:05:51.974  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:05:51.974    07:50:07 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:51.974     07:50:07 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:05:51.974     07:50:07 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:52.233    07:50:08 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:52.233    07:50:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:52.233    07:50:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:52.233    07:50:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:52.233    07:50:08 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:52.233    07:50:08 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:52.233    07:50:08 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:52.233    07:50:08 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:52.233    07:50:08 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:52.233    07:50:08 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:52.233    07:50:08 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:52.233    07:50:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:52.233    07:50:08 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:52.233    07:50:08 thread -- scripts/common.sh@345 -- # : 1
00:05:52.233    07:50:08 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:52.233    07:50:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:52.233     07:50:08 thread -- scripts/common.sh@365 -- # decimal 1
00:05:52.233     07:50:08 thread -- scripts/common.sh@353 -- # local d=1
00:05:52.233     07:50:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:52.233     07:50:08 thread -- scripts/common.sh@355 -- # echo 1
00:05:52.233    07:50:08 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:52.233     07:50:08 thread -- scripts/common.sh@366 -- # decimal 2
00:05:52.233     07:50:08 thread -- scripts/common.sh@353 -- # local d=2
00:05:52.233     07:50:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:52.233     07:50:08 thread -- scripts/common.sh@355 -- # echo 2
00:05:52.233    07:50:08 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:52.233    07:50:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:52.233    07:50:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:52.233    07:50:08 thread -- scripts/common.sh@368 -- # return 0
00:05:52.233    07:50:08 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:52.233    07:50:08 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:52.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.233  		--rc genhtml_branch_coverage=1
00:05:52.233  		--rc genhtml_function_coverage=1
00:05:52.233  		--rc genhtml_legend=1
00:05:52.233  		--rc geninfo_all_blocks=1
00:05:52.233  		--rc geninfo_unexecuted_blocks=1
00:05:52.233  		
00:05:52.233  		'
00:05:52.233    07:50:08 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:52.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.233  		--rc genhtml_branch_coverage=1
00:05:52.233  		--rc genhtml_function_coverage=1
00:05:52.233  		--rc genhtml_legend=1
00:05:52.233  		--rc geninfo_all_blocks=1
00:05:52.233  		--rc geninfo_unexecuted_blocks=1
00:05:52.233  		
00:05:52.233  		'
00:05:52.233    07:50:08 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:52.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.233  		--rc genhtml_branch_coverage=1
00:05:52.233  		--rc genhtml_function_coverage=1
00:05:52.233  		--rc genhtml_legend=1
00:05:52.233  		--rc geninfo_all_blocks=1
00:05:52.233  		--rc geninfo_unexecuted_blocks=1
00:05:52.233  		
00:05:52.233  		'
00:05:52.233    07:50:08 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:52.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.233  		--rc genhtml_branch_coverage=1
00:05:52.233  		--rc genhtml_function_coverage=1
00:05:52.233  		--rc genhtml_legend=1
00:05:52.233  		--rc geninfo_all_blocks=1
00:05:52.233  		--rc geninfo_unexecuted_blocks=1
00:05:52.233  		
00:05:52.233  		'
00:05:52.233   07:50:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:52.233   07:50:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:52.234   07:50:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.234   07:50:08 thread -- common/autotest_common.sh@10 -- # set +x
00:05:52.234  ************************************
00:05:52.234  START TEST thread_poller_perf
00:05:52.234  ************************************
00:05:52.234   07:50:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:52.234  [2024-11-20 07:50:08.073425] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:52.234  [2024-11-20 07:50:08.074395] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58695 ]
00:05:52.234  [2024-11-20 07:50:08.211102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.492  [2024-11-20 07:50:08.279959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.492  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:53.429  [2024-11-20T07:50:09.469Z]  ======================================
00:05:53.429  [2024-11-20T07:50:09.469Z]  busy:2203424196 (cyc)
00:05:53.429  [2024-11-20T07:50:09.469Z]  total_run_count: 1591000
00:05:53.429  [2024-11-20T07:50:09.469Z]  tsc_hz: 2200000000 (cyc)
00:05:53.429  [2024-11-20T07:50:09.469Z]  ======================================
00:05:53.429  [2024-11-20T07:50:09.469Z]  poller_cost: 1384 (cyc), 629 (nsec)
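The poller_cost reported above is just the busy cycle count divided by total_run_count, converted to nanoseconds via the TSC frequency. A quick sanity check of the printed numbers using plain shell arithmetic (illustration only, not part of the test):

  # Recomputes poller_cost from the counters printed above (integer math).
  busy=2203424196 total_run_count=1591000 tsc_hz=2200000000
  cost_cyc=$(( busy / total_run_count ))              # => 1384 (cyc)
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))     # => 629 (nsec)
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The second poller_perf run further below (period 0) follows the same formula: 2201339806 / 14185000 is roughly 155 cycles, i.e. about 70 ns per poller call.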
00:05:53.429  
00:05:53.429  real	0m1.278s
00:05:53.429  ************************************
00:05:53.429  END TEST thread_poller_perf
00:05:53.429  ************************************
00:05:53.429  user	0m1.123s
00:05:53.429  sys	0m0.044s
00:05:53.429   07:50:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:53.429   07:50:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:53.429   07:50:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:53.429   07:50:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:53.429   07:50:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:53.429   07:50:09 thread -- common/autotest_common.sh@10 -- # set +x
00:05:53.429  ************************************
00:05:53.429  START TEST thread_poller_perf
00:05:53.429  ************************************
00:05:53.429   07:50:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:53.429  [2024-11-20 07:50:09.410372] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:53.429  [2024-11-20 07:50:09.410485] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58726 ]
00:05:53.688  [2024-11-20 07:50:09.544849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.688  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:53.688  [2024-11-20 07:50:09.612419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.065  [2024-11-20T07:50:11.105Z]  ======================================
00:05:55.065  [2024-11-20T07:50:11.105Z]  busy:2201339806 (cyc)
00:05:55.065  [2024-11-20T07:50:11.105Z]  total_run_count: 14185000
00:05:55.065  [2024-11-20T07:50:11.105Z]  tsc_hz: 2200000000 (cyc)
00:05:55.065  [2024-11-20T07:50:11.105Z]  ======================================
00:05:55.065  [2024-11-20T07:50:11.105Z]  poller_cost: 155 (cyc), 70 (nsec)
00:05:55.065  
00:05:55.065  real	0m1.273s
00:05:55.065  user	0m1.120s
00:05:55.065  sys	0m0.044s
00:05:55.065   07:50:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.065  ************************************
00:05:55.065  END TEST thread_poller_perf
00:05:55.065  ************************************
00:05:55.065   07:50:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:55.065   07:50:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:55.065  ************************************
00:05:55.065  END TEST thread
00:05:55.065  ************************************
00:05:55.065  
00:05:55.065  real	0m2.845s
00:05:55.065  user	0m2.380s
00:05:55.065  sys	0m0.241s
00:05:55.065   07:50:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.066   07:50:10 thread -- common/autotest_common.sh@10 -- # set +x
00:05:55.066   07:50:10  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:55.066   07:50:10  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:55.066   07:50:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:55.066   07:50:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:55.066   07:50:10  -- common/autotest_common.sh@10 -- # set +x
00:05:55.066  ************************************
00:05:55.066  START TEST app_cmdline
00:05:55.066  ************************************
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:55.066  * Looking for test storage...
00:05:55.066  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:55.066     07:50:10 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:05:55.066     07:50:10 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:55.066     07:50:10 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:55.066  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:55.066    07:50:10 app_cmdline -- scripts/common.sh@368 -- # return 0
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:55.066  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:55.066  		--rc genhtml_branch_coverage=1
00:05:55.066  		--rc genhtml_function_coverage=1
00:05:55.066  		--rc genhtml_legend=1
00:05:55.066  		--rc geninfo_all_blocks=1
00:05:55.066  		--rc geninfo_unexecuted_blocks=1
00:05:55.066  		
00:05:55.066  		'
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:55.066  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:55.066  		--rc genhtml_branch_coverage=1
00:05:55.066  		--rc genhtml_function_coverage=1
00:05:55.066  		--rc genhtml_legend=1
00:05:55.066  		--rc geninfo_all_blocks=1
00:05:55.066  		--rc geninfo_unexecuted_blocks=1
00:05:55.066  		
00:05:55.066  		'
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:55.066  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:55.066  		--rc genhtml_branch_coverage=1
00:05:55.066  		--rc genhtml_function_coverage=1
00:05:55.066  		--rc genhtml_legend=1
00:05:55.066  		--rc geninfo_all_blocks=1
00:05:55.066  		--rc geninfo_unexecuted_blocks=1
00:05:55.066  		
00:05:55.066  		'
00:05:55.066    07:50:10 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:55.066  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:55.066  		--rc genhtml_branch_coverage=1
00:05:55.066  		--rc genhtml_function_coverage=1
00:05:55.066  		--rc genhtml_legend=1
00:05:55.066  		--rc geninfo_all_blocks=1
00:05:55.066  		--rc geninfo_unexecuted_blocks=1
00:05:55.066  		
00:05:55.066  		'
00:05:55.066   07:50:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:55.066   07:50:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58808
00:05:55.066   07:50:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58808
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58808 ']'
00:05:55.066   07:50:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:55.066   07:50:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:55.066  [2024-11-20 07:50:11.014061] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:55.066  [2024-11-20 07:50:11.014935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58808 ]
00:05:55.326  [2024-11-20 07:50:11.146382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.326  [2024-11-20 07:50:11.219407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.585   07:50:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:55.585   07:50:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:55.585   07:50:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:05:55.844  {
00:05:55.844    "version": "SPDK v25.01-pre git sha1 1c7c7c64f",
00:05:55.844    "fields": {
00:05:55.844      "major": 25,
00:05:55.844      "minor": 1,
00:05:55.844      "patch": 0,
00:05:55.844      "suffix": "-pre",
00:05:55.844      "commit": "1c7c7c64f"
00:05:55.844    }
00:05:55.844  }
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:55.844    07:50:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:55.844    07:50:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:55.844    07:50:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:55.844    07:50:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:55.844    07:50:11 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:55.844    07:50:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:55.844   07:50:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.844    07:50:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.844    07:50:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:05:55.844   07:50:11 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:56.412  request:
00:05:56.412  {
00:05:56.412    "method": "env_dpdk_get_mem_stats",
00:05:56.412    "req_id": 1
00:05:56.412  }
00:05:56.412  Got JSON-RPC error response
00:05:56.412  response:
00:05:56.412  {
00:05:56.412    "code": -32601,
00:05:56.412    "message": "Method not found"
00:05:56.412  }
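The -32601 response above is the expected effect of starting this spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods: any method outside that allow-list is reported as not found. A hedged way to reproduce the same behaviour by hand against such a target (same binary and rpc.py paths as used in this log; sketch only):

  # Target started with an RPC allow-list, as in this test run.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # allowed -> returns the two methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # filtered -> JSON-RPC error -32601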
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:56.412   07:50:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58808
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58808 ']'
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58808
00:05:56.412    07:50:12 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:56.412    07:50:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58808
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:56.412  killing process with pid 58808
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58808'
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 58808
00:05:56.412   07:50:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 58808
00:05:56.704  ************************************
00:05:56.704  END TEST app_cmdline
00:05:56.704  ************************************
00:05:56.704  
00:05:56.704  real	0m1.836s
00:05:56.704  user	0m2.273s
00:05:56.704  sys	0m0.479s
00:05:56.704   07:50:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.704   07:50:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:56.704   07:50:12  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:56.704   07:50:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:56.704   07:50:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:56.704   07:50:12  -- common/autotest_common.sh@10 -- # set +x
00:05:56.704  ************************************
00:05:56.704  START TEST version
00:05:56.704  ************************************
00:05:56.704   07:50:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:56.704  * Looking for test storage...
00:05:56.704  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:56.704    07:50:12 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:56.704     07:50:12 version -- common/autotest_common.sh@1693 -- # lcov --version
00:05:56.704     07:50:12 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:56.963    07:50:12 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:56.963    07:50:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:56.963    07:50:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:56.963    07:50:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:56.963    07:50:12 version -- scripts/common.sh@336 -- # IFS=.-:
00:05:56.963    07:50:12 version -- scripts/common.sh@336 -- # read -ra ver1
00:05:56.963    07:50:12 version -- scripts/common.sh@337 -- # IFS=.-:
00:05:56.963    07:50:12 version -- scripts/common.sh@337 -- # read -ra ver2
00:05:56.963    07:50:12 version -- scripts/common.sh@338 -- # local 'op=<'
00:05:56.963    07:50:12 version -- scripts/common.sh@340 -- # ver1_l=2
00:05:56.963    07:50:12 version -- scripts/common.sh@341 -- # ver2_l=1
00:05:56.963    07:50:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:56.963    07:50:12 version -- scripts/common.sh@344 -- # case "$op" in
00:05:56.963    07:50:12 version -- scripts/common.sh@345 -- # : 1
00:05:56.963    07:50:12 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:56.963    07:50:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:56.963     07:50:12 version -- scripts/common.sh@365 -- # decimal 1
00:05:56.963     07:50:12 version -- scripts/common.sh@353 -- # local d=1
00:05:56.963     07:50:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:56.963     07:50:12 version -- scripts/common.sh@355 -- # echo 1
00:05:56.963    07:50:12 version -- scripts/common.sh@365 -- # ver1[v]=1
00:05:56.963     07:50:12 version -- scripts/common.sh@366 -- # decimal 2
00:05:56.963     07:50:12 version -- scripts/common.sh@353 -- # local d=2
00:05:56.963     07:50:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:56.963     07:50:12 version -- scripts/common.sh@355 -- # echo 2
00:05:56.963    07:50:12 version -- scripts/common.sh@366 -- # ver2[v]=2
00:05:56.963    07:50:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:56.963    07:50:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:56.963    07:50:12 version -- scripts/common.sh@368 -- # return 0
00:05:56.963    07:50:12 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:56.963    07:50:12 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:56.963  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.963  		--rc genhtml_branch_coverage=1
00:05:56.963  		--rc genhtml_function_coverage=1
00:05:56.963  		--rc genhtml_legend=1
00:05:56.963  		--rc geninfo_all_blocks=1
00:05:56.963  		--rc geninfo_unexecuted_blocks=1
00:05:56.963  		
00:05:56.963  		'
00:05:56.963    07:50:12 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:56.963  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.963  		--rc genhtml_branch_coverage=1
00:05:56.963  		--rc genhtml_function_coverage=1
00:05:56.963  		--rc genhtml_legend=1
00:05:56.963  		--rc geninfo_all_blocks=1
00:05:56.963  		--rc geninfo_unexecuted_blocks=1
00:05:56.963  		
00:05:56.963  		'
00:05:56.963    07:50:12 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:56.963  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.963  		--rc genhtml_branch_coverage=1
00:05:56.963  		--rc genhtml_function_coverage=1
00:05:56.963  		--rc genhtml_legend=1
00:05:56.963  		--rc geninfo_all_blocks=1
00:05:56.963  		--rc geninfo_unexecuted_blocks=1
00:05:56.963  		
00:05:56.963  		'
00:05:56.964    07:50:12 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:56.964  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.964  		--rc genhtml_branch_coverage=1
00:05:56.964  		--rc genhtml_function_coverage=1
00:05:56.964  		--rc genhtml_legend=1
00:05:56.964  		--rc geninfo_all_blocks=1
00:05:56.964  		--rc geninfo_unexecuted_blocks=1
00:05:56.964  		
00:05:56.964  		'
00:05:56.964    07:50:12 version -- app/version.sh@17 -- # get_header_version major
00:05:56.964    07:50:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # cut -f2
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # tr -d '"'
00:05:56.964   07:50:12 version -- app/version.sh@17 -- # major=25
00:05:56.964    07:50:12 version -- app/version.sh@18 -- # get_header_version minor
00:05:56.964    07:50:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # cut -f2
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # tr -d '"'
00:05:56.964   07:50:12 version -- app/version.sh@18 -- # minor=1
00:05:56.964    07:50:12 version -- app/version.sh@19 -- # get_header_version patch
00:05:56.964    07:50:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # cut -f2
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # tr -d '"'
00:05:56.964   07:50:12 version -- app/version.sh@19 -- # patch=0
00:05:56.964    07:50:12 version -- app/version.sh@20 -- # get_header_version suffix
00:05:56.964    07:50:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # cut -f2
00:05:56.964    07:50:12 version -- app/version.sh@14 -- # tr -d '"'
00:05:56.964   07:50:12 version -- app/version.sh@20 -- # suffix=-pre
00:05:56.964   07:50:12 version -- app/version.sh@22 -- # version=25.1
00:05:56.964   07:50:12 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:56.964   07:50:12 version -- app/version.sh@28 -- # version=25.1rc0
00:05:56.964   07:50:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:56.964    07:50:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:56.964   07:50:12 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:56.964   07:50:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
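The version test traced above extracts SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX from include/spdk/version.h, builds the string 25.1, and, since the patch level is 0 and the suffix is -pre, ends up comparing 25.1rc0 against the version reported by the installed Python package. A condensed sketch of that assembly, reconstructed from the traced grep/cut/tr steps rather than taken verbatim from version.sh:

  # Sketch of the version-string assembly; header path as used in this log.
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  get_field() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  major=$(get_field MAJOR) minor=$(get_field MINOR) patch=$(get_field PATCH) suffix=$(get_field SUFFIX)
  version="${major}.${minor}"
  (( patch != 0 )) && version="${version}.${patch}"
  [[ $suffix == -pre ]] && version="${version}rc0"    # assumption: the -pre suffix is what maps to the rc0 tag
  echo "$version"                                     # => 25.1rc0 for the values seen here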
00:05:56.964  
00:05:56.964  real	0m0.270s
00:05:56.964  user	0m0.180s
00:05:56.964  sys	0m0.125s
00:05:56.964  ************************************
00:05:56.964  END TEST version
00:05:56.964  ************************************
00:05:56.964   07:50:12 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.964   07:50:12 version -- common/autotest_common.sh@10 -- # set +x
00:05:56.964   07:50:12  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:56.964   07:50:12  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:56.964    07:50:12  -- spdk/autotest.sh@194 -- # uname -s
00:05:56.964   07:50:12  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:56.964   07:50:12  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:56.964   07:50:12  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:56.964   07:50:12  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:05:56.964   07:50:12  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:05:56.964   07:50:12  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:56.964   07:50:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:56.964   07:50:12  -- common/autotest_common.sh@10 -- # set +x
00:05:56.964  ************************************
00:05:56.964  START TEST blockdev_nvme
00:05:56.964  ************************************
00:05:56.964   07:50:12 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:05:57.223  * Looking for test storage...
00:05:57.223  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:05:57.223    07:50:13 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:57.223     07:50:13 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:05:57.223     07:50:13 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:57.223    07:50:13 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:57.223    07:50:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:57.223     07:50:13 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:05:57.223     07:50:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:05:57.223     07:50:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:57.224     07:50:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:05:57.224    07:50:13 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:05:57.224     07:50:13 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:05:57.224     07:50:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:05:57.224     07:50:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:57.224     07:50:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:05:57.224    07:50:13 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:05:57.224    07:50:13 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:57.224    07:50:13 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:57.224    07:50:13 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:05:57.224    07:50:13 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:57.224    07:50:13 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:57.224  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.224  		--rc genhtml_branch_coverage=1
00:05:57.224  		--rc genhtml_function_coverage=1
00:05:57.224  		--rc genhtml_legend=1
00:05:57.224  		--rc geninfo_all_blocks=1
00:05:57.224  		--rc geninfo_unexecuted_blocks=1
00:05:57.224  		
00:05:57.224  		'
00:05:57.224    07:50:13 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:57.224  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.224  		--rc genhtml_branch_coverage=1
00:05:57.224  		--rc genhtml_function_coverage=1
00:05:57.224  		--rc genhtml_legend=1
00:05:57.224  		--rc geninfo_all_blocks=1
00:05:57.224  		--rc geninfo_unexecuted_blocks=1
00:05:57.224  		
00:05:57.224  		'
00:05:57.224    07:50:13 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:57.224  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.224  		--rc genhtml_branch_coverage=1
00:05:57.224  		--rc genhtml_function_coverage=1
00:05:57.224  		--rc genhtml_legend=1
00:05:57.224  		--rc geninfo_all_blocks=1
00:05:57.224  		--rc geninfo_unexecuted_blocks=1
00:05:57.224  		
00:05:57.224  		'
00:05:57.224    07:50:13 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:57.224  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.224  		--rc genhtml_branch_coverage=1
00:05:57.224  		--rc genhtml_function_coverage=1
00:05:57.224  		--rc genhtml_legend=1
00:05:57.224  		--rc geninfo_all_blocks=1
00:05:57.224  		--rc geninfo_unexecuted_blocks=1
00:05:57.224  		
00:05:57.224  		'
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:57.224    07:50:13 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:05:57.224    07:50:13 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device=
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek=
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx=
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]]
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]]
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=58971
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:05:57.224   07:50:13 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 58971
00:05:57.224   07:50:13 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 58971 ']'
00:05:57.224   07:50:13 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:57.224   07:50:13 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:57.224   07:50:13 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:57.224  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:57.224   07:50:13 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:57.224   07:50:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:57.224  [2024-11-20 07:50:13.229713] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:57.224  [2024-11-20 07:50:13.230037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ]
00:05:57.483  [2024-11-20 07:50:13.366411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.483  [2024-11-20 07:50:13.443137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.742   07:50:13 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:57.742   07:50:13 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
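waitforlisten above blocks until the freshly started spdk_tgt (pid 58971) is answering RPCs on /var/tmp/spdk.sock. As a rough, hypothetical sketch of that idea only (the real helper in autotest_common.sh is more involved), one could poll the socket with any cheap RPC:

  # Hypothetical polling loop, not the autotest_common.sh implementation:
  # rpc_get_methods only succeeds once the target is listening on the socket.
  wait_for_spdk_sock() {
      local sock=${1:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.1
      done
      return 1
  }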
00:05:57.742   07:50:13 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:05:57.742   07:50:13 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf
00:05:57.742   07:50:13 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:05:57.742   07:50:13 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:05:57.742    07:50:13 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:58.064   07:50:13 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } } ] }'\'''
00:05:58.064   07:50:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.064   07:50:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.064   07:50:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
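setup_nvme_conf above feeds the gen_nvme.sh output to load_subsystem_config, which attaches both QEMU-emulated controllers in one shot. A hand-typed equivalent, shown only as a hedged sketch (PCI addresses and paths taken from this trace; adjust for another machine):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach each PCIe controller individually instead of via the generated JSON.
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0

Either way the namespaces surface as the Nvme0n1 and Nvme1n1 bdevs used by the rest of the suite.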
00:05:58.064   07:50:13 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:05:58.064   07:50:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.064   07:50:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.064   07:50:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:58.065   07:50:13 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat
00:05:58.065    07:50:13 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:05:58.065    07:50:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.065    07:50:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.065    07:50:13 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:58.065    07:50:13 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:05:58.065    07:50:13 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.065    07:50:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:58.065    07:50:14 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:58.065   07:50:14 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:05:58.065    07:50:14 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:58.065    07:50:14 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.065    07:50:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:58.340   07:50:14 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:05:58.340    07:50:14 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "a052e328-3a80-4a41-9c91-f19ba65d89b1"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "a052e328-3a80-4a41-9c91-f19ba65d89b1",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1",' '  "aliases": [' '    "e1f5f4ca-a30b-4d06-8819-c9be842ce282"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "e1f5f4ca-a30b-4d06-8819-c9be842ce282",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:11.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:11.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12341",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12341",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '       
   "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:05:58.340    07:50:14 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name
00:05:58.340   07:50:14 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:05:58.340   07:50:14 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
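The mapfile/jq pipeline above turns the bdev_get_bdevs dump into a plain list of unclaimed bdev names and picks the first one as hello_world_bdev. A small command-line sketch of the same selection, using the paths from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # List every bdev that is not claimed by another module, names only.
  $rpc bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
  # -> Nvme0n1
  #    Nvme1n1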
00:05:58.340   07:50:14 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:05:58.340   07:50:14 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 58971
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 58971 ']'
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 58971
00:05:58.340    07:50:14 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:58.340    07:50:14 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58971
00:05:58.340  killing process with pid 58971
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58971'
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 58971
00:05:58.340   07:50:14 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 58971
00:05:58.599   07:50:14 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:05:58.599   07:50:14 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:05:58.599   07:50:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:05:58.599   07:50:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.599   07:50:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:58.599  ************************************
00:05:58.599  START TEST bdev_hello_world
00:05:58.599  ************************************
00:05:58.599   07:50:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:05:58.599  [2024-11-20 07:50:14.616420] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:58.599  [2024-11-20 07:50:14.617026] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ]
00:05:58.859  [2024-11-20 07:50:14.752138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.859  [2024-11-20 07:50:14.820770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.118  [2024-11-20 07:50:15.103717] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:05:59.118  [2024-11-20 07:50:15.103790] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:05:59.118  [2024-11-20 07:50:15.103810] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:05:59.118  [2024-11-20 07:50:15.105795] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:05:59.118  [2024-11-20 07:50:15.106319] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:05:59.118  [2024-11-20 07:50:15.106351] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:05:59.118  [2024-11-20 07:50:15.106525] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:05:59.118  
00:05:59.118  [2024-11-20 07:50:15.106546] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:05:59.377  
00:05:59.377  real	0m0.726s
00:05:59.377  user	0m0.453s
00:05:59.377  sys	0m0.166s
00:05:59.377   07:50:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.377  ************************************
00:05:59.377  END TEST bdev_hello_world
00:05:59.377  ************************************
00:05:59.377   07:50:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
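The bdev_hello_world test is just the stock example binary run against the generated config: it opens Nvme0n1, writes one buffer, reads it back, and expects "Hello World!". Reproducing it by hand (command line copied from the trace above):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1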
00:05:59.377   07:50:15 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:05:59.377   07:50:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:59.377   07:50:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.377   07:50:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:59.377  ************************************
00:05:59.377  START TEST bdev_bounds
00:05:59.377  ************************************
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:05:59.377  Process bdevio pid: 59055
00:05:59.377  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59055
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59055'
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59055
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59055 ']'
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:59.377   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:59.377  [2024-11-20 07:50:15.399911] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:05:59.377  [2024-11-20 07:50:15.400240] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ]
00:05:59.636  [2024-11-20 07:50:15.531503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:59.636  [2024-11-20 07:50:15.600915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:59.636  [2024-11-20 07:50:15.601059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:59.636  [2024-11-20 07:50:15.601063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.895   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:59.895   07:50:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:05:59.895   07:50:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:00.154  I/O targets:
00:06:00.154    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:06:00.154    Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:06:00.154  
00:06:00.154  
00:06:00.154       CUnit - A unit testing framework for C - Version 2.1-3
00:06:00.154       http://cunit.sourceforge.net/
00:06:00.154  
00:06:00.154  
00:06:00.154  Suite: bdevio tests on: Nvme1n1
00:06:00.154    Test: blockdev write read block ...passed
00:06:00.154    Test: blockdev write zeroes read block ...passed
00:06:00.154    Test: blockdev write zeroes read no split ...passed
00:06:00.154    Test: blockdev write zeroes read split ...passed
00:06:00.154    Test: blockdev write zeroes read split partial ...passed
00:06:00.154    Test: blockdev reset ...[2024-11-20 07:50:16.067785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:06:00.154  passed
00:06:00.154    Test: blockdev write read 8 blocks ...[2024-11-20 07:50:16.069485] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:00.154  passed
00:06:00.154    Test: blockdev write read size > 128k ...passed
00:06:00.154    Test: blockdev write read invalid size ...passed
00:06:00.154    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:00.154    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:00.154    Test: blockdev write read max offset ...passed
00:06:00.154    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:00.154    Test: blockdev writev readv 8 blocks ...passed
00:06:00.154    Test: blockdev writev readv 30 x 1block ...passed
00:06:00.154    Test: blockdev writev readv block ...passed
00:06:00.154    Test: blockdev writev readv size > 128k ...passed
00:06:00.154    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:00.154    Test: blockdev comparev and writev ...[2024-11-20 07:50:16.073844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c300a000 len:0x1000
00:06:00.154  [2024-11-20 07:50:16.073887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:00.154  passed
00:06:00.154    Test: blockdev nvme passthru rw ...passed
00:06:00.154    Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:50:16.074437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:00.154  [2024-11-20 07:50:16.074612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:00.154  passed
00:06:00.154    Test: blockdev nvme admin passthru ...passed
00:06:00.154    Test: blockdev copy ...passed
00:06:00.154  Suite: bdevio tests on: Nvme0n1
00:06:00.154    Test: blockdev write read block ...passed
00:06:00.154    Test: blockdev write zeroes read block ...passed
00:06:00.154    Test: blockdev write zeroes read no split ...passed
00:06:00.154    Test: blockdev write zeroes read split ...passed
00:06:00.154    Test: blockdev write zeroes read split partial ...passed
00:06:00.154    Test: blockdev reset ...[2024-11-20 07:50:16.091027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:06:00.154  passed
00:06:00.154    Test: blockdev write read 8 blocks ...[2024-11-20 07:50:16.092590] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:00.154  passed
00:06:00.154    Test: blockdev write read size > 128k ...passed
00:06:00.154    Test: blockdev write read invalid size ...passed
00:06:00.154    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:00.154    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:00.154    Test: blockdev write read max offset ...passed
00:06:00.154    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:00.154    Test: blockdev writev readv 8 blocks ...passed
00:06:00.154    Test: blockdev writev readv 30 x 1block ...passed
00:06:00.154    Test: blockdev writev readv block ...passed
00:06:00.154    Test: blockdev writev readv size > 128k ...passed
00:06:00.154    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:00.154    Test: blockdev comparev and writev ...[2024-11-20 07:50:16.097293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad206000 len:0x1000
00:06:00.154  [2024-11-20 07:50:16.097456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:00.154  passed
00:06:00.154    Test: blockdev nvme passthru rw ...passed
00:06:00.154    Test: blockdev nvme passthru vendor specific ...passed
00:06:00.154    Test: blockdev nvme admin passthru ...[2024-11-20 07:50:16.098050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:00.154  [2024-11-20 07:50:16.098073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:00.154  passed
00:06:00.154    Test: blockdev copy ...passed
00:06:00.154  
00:06:00.154  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:00.154                suites      2      2    n/a      0        0
00:06:00.154                 tests     46     46     46      0        0
00:06:00.154               asserts    304    304    304      0      n/a
00:06:00.154  
00:06:00.154  Elapsed time =    0.106 seconds
00:06:00.154  0
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59055
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59055 ']'
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59055
00:06:00.154    07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:00.154    07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59055
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59055'
00:06:00.154  killing process with pid 59055
00:06:00.154   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59055
00:06:00.155   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59055
00:06:00.414   07:50:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:06:00.414  
00:06:00.414  real	0m0.955s
00:06:00.414  user	0m2.380s
00:06:00.414  sys	0m0.229s
00:06:00.414   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:00.414   07:50:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:00.414  ************************************
00:06:00.414  END TEST bdev_bounds
00:06:00.414  ************************************
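bdev_bounds above is driven by two cooperating processes: the bdevio application is started against the same JSON config, and tests.py then tells it over the RPC socket to run the CUnit suites whose results appear in the log. A condensed, hedged sketch of that flow (binaries, flags, and paths copied from this trace):

  spdk=/home/vagrant/spdk_repo/spdk
  # Start bdevio against the same config, flags as in the trace above.
  $spdk/test/bdev/bdevio/bdevio -w -s 0 --json $spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # ...wait for the RPC socket to come up, then trigger the suites:
  $spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"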
00:06:00.414   07:50:16 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1' ''
00:06:00.414   07:50:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:00.414   07:50:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.414   07:50:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:00.414  ************************************
00:06:00.414  START TEST bdev_nbd
00:06:00.414  ************************************
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1' ''
00:06:00.414    07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1')
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1')
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59095
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59095 /var/tmp/spdk-nbd.sock
00:06:00.414  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59095 ']'
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.414   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:00.414  [2024-11-20 07:50:16.414537] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:00.414  [2024-11-20 07:50:16.414653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:00.673  [2024-11-20 07:50:16.542307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.673  [2024-11-20 07:50:16.608350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1'
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1')
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1'
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1')
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:06:00.932   07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:06:00.932    07:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:06:01.500    07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:01.500  1+0 records in
00:06:01.500  1+0 records out
00:06:01.500  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503622 s, 8.1 MB/s
00:06:01.500    07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:01.500   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:06:01.500    07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:06:01.759   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:06:01.759    07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:06:01.759   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:06:01.759   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:01.760  1+0 records in
00:06:01.760  1+0 records out
00:06:01.760  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494667 s, 8.3 MB/s
00:06:01.760    07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:01.760   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:06:01.760    07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:06:02.019    {
00:06:02.019      "nbd_device": "/dev/nbd0",
00:06:02.019      "bdev_name": "Nvme0n1"
00:06:02.019    },
00:06:02.019    {
00:06:02.019      "nbd_device": "/dev/nbd1",
00:06:02.019      "bdev_name": "Nvme1n1"
00:06:02.019    }
00:06:02.019  ]'
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:06:02.019    07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:06:02.019    {
00:06:02.019      "nbd_device": "/dev/nbd0",
00:06:02.019      "bdev_name": "Nvme0n1"
00:06:02.019    },
00:06:02.019    {
00:06:02.019      "nbd_device": "/dev/nbd1",
00:06:02.019      "bdev_name": "Nvme1n1"
00:06:02.019    }
00:06:02.019  ]'
00:06:02.019    07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:02.019   07:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:02.278    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:02.278   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:02.537    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:02.537   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:02.537    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:02.537    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:02.537     07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:03.105    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:03.105     07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:03.105     07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:03.105    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:03.105     07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:03.105     07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:03.105     07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:03.105    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:03.105    07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
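The start/stop verification above exports each bdev as a kernel /dev/nbdX device through the spdk-nbd RPC socket, waits for it to show up in /proc/partitions, reads one 4 KiB block with dd, and detaches it again. A trimmed-down sketch of the same round trip (device and socket names from this run; assumes the nbd kernel module is loaded and /dev/nbd0 is free):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0
  grep -qw nbd0 /proc/partitions && echo "nbd0 is up"
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  $rpc nbd_stop_disk /dev/nbd0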
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1' '/dev/nbd0 /dev/nbd1'
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1')
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1' '/dev/nbd0 /dev/nbd1'
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1')
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.105   07:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:06:03.380  /dev/nbd0
00:06:03.380    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:03.380  1+0 records in
00:06:03.380  1+0 records out
00:06:03.380  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516629 s, 7.9 MB/s
00:06:03.380    07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:03.380   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:03.381   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.381   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:06:03.645  /dev/nbd1
00:06:03.645    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:03.645  1+0 records in
00:06:03.645  1+0 records out
00:06:03.645  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051326 s, 8.0 MB/s
00:06:03.645    07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:03.645   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.645    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:03.645    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.645     07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:03.903    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:03.903    {
00:06:03.903      "nbd_device": "/dev/nbd0",
00:06:03.903      "bdev_name": "Nvme0n1"
00:06:03.903    },
00:06:03.903    {
00:06:03.903      "nbd_device": "/dev/nbd1",
00:06:03.904      "bdev_name": "Nvme1n1"
00:06:03.904    }
00:06:03.904  ]'
00:06:03.904     07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:06:03.904    {
00:06:03.904      "nbd_device": "/dev/nbd0",
00:06:03.904      "bdev_name": "Nvme0n1"
00:06:03.904    },
00:06:03.904    {
00:06:03.904      "nbd_device": "/dev/nbd1",
00:06:03.904      "bdev_name": "Nvme1n1"
00:06:03.904    }
00:06:03.904  ]'
00:06:03.904     07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:03.904    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:03.904  /dev/nbd1'
00:06:03.904     07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:03.904  /dev/nbd1'
00:06:03.904     07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:03.904    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2
00:06:03.904    07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:06:03.904  256+0 records in
00:06:03.904  256+0 records out
00:06:03.904  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00749246 s, 140 MB/s
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:03.904   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:04.162  256+0 records in
00:06:04.162  256+0 records out
00:06:04.162  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0574433 s, 18.3 MB/s
00:06:04.162   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:04.162   07:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:04.162  256+0 records in
00:06:04.163  256+0 records out
00:06:04.163  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0542158 s, 19.3 MB/s
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
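The data verification above is a plain write-then-compare: one 1 MiB file of random data is written through each exported NBD device with O_DIRECT and then compared back byte for byte with cmp. Condensed into a standalone sketch (temporary path shortened; the trace uses test/bdev/nbdrandtest):

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"                              # read back and compare
  done
  rm "$tmp"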
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:04.163   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:04.422    07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:04.422   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:04.680    07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:04.680   07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:04.680    07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:04.680    07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.680     07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:05.249    07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:05.249     07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:05.249     07:50:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:05.249    07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:05.249     07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:05.249     07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:05.249     07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:05.249    07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:05.249    07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
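A minimal standalone equivalent of the teardown traced above, assuming the same rpc.py path and /var/tmp/spdk-nbd.sock socket: stop each exported device, wait for the kernel to drop it from /proc/partitions, then confirm the RPC server no longer reports any nbd devices.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for nbd in /dev/nbd0 /dev/nbd1; do
        "$rpc" -s "$sock" nbd_stop_disk "$nbd"
        name=$(basename "$nbd")
        for ((i = 1; i <= 20; i++)); do                  # same retry budget as waitfornbd_exit
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    (( count == 0 )) && echo "all nbd devices detached"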
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:06:05.249   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:06:05.508  malloc_lvol_verify
00:06:05.508   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:06:05.768  d05a2b6b-3b0d-4a62-97da-8c396e163e38
00:06:05.768   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:06:06.027  5564012d-2f8d-4a01-be50-8fb4fb570ca8
00:06:06.027   07:50:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:06:06.286  /dev/nbd0
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:06:06.286  mke2fs 1.47.0 (5-Feb-2023)
00:06:06.286  Discarding device blocks:    0/4096         done                            
00:06:06.286  Creating filesystem with 4096 1k blocks and 1024 inodes
00:06:06.286  
00:06:06.286  Allocating group tables: 0/1   done                            
00:06:06.286  Writing inode tables: 0/1   done                            
00:06:06.286  Creating journal (1024 blocks): done
00:06:06.286  Writing superblocks and filesystem accounting information: 0/1   done
00:06:06.286  
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:06.286   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:06.913    07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
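The lvol verification traced above boils down to roughly the following, again assuming the same rpc.py path and socket; the sizes match the traced RPC arguments.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512      # 16 MiB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                        # 4 MiB lvol inside lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    until [ -e /sys/block/nbd0/size ] && [ "$(cat /sys/block/nbd0/size)" -gt 0 ]; do
        sleep 0.1                                                           # wait for the nbd capacity to appear
    done
    mkfs.ext4 /dev/nbd0                                                     # the actual verification step
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0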
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59095
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59095 ']'
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59095
00:06:06.913    07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:06.913    07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59095
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59095'
00:06:06.913  killing process with pid 59095
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59095
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59095
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:06:06.913  
00:06:06.913  real	0m6.493s
00:06:06.913  user	0m9.990s
00:06:06.913  sys	0m2.084s
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:06.913   07:50:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:06.913  ************************************
00:06:06.913  END TEST bdev_nbd
00:06:06.913  ************************************
00:06:06.913   07:50:22 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:06:06.913   07:50:22 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:06:06.913  skipping fio tests on NVMe due to multi-ns failures.
00:06:06.913   07:50:22 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:06:06.913   07:50:22 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:06.914   07:50:22 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:06.914   07:50:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:06.914   07:50:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:06.914   07:50:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:06.914  ************************************
00:06:06.914  START TEST bdev_verify
00:06:06.914  ************************************
00:06:06.914   07:50:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:07.173  [2024-11-20 07:50:22.956465] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:07.173  [2024-11-20 07:50:22.956593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ]
00:06:07.173  [2024-11-20 07:50:23.085144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:07.173  [2024-11-20 07:50:23.151835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.173  [2024-11-20 07:50:23.151846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.431  Running I/O for 5 seconds...
00:06:09.741      30784.00 IOPS,   120.25 MiB/s
[2024-11-20T07:50:26.714Z]     31168.00 IOPS,   121.75 MiB/s
[2024-11-20T07:50:27.649Z]     30997.33 IOPS,   121.08 MiB/s
[2024-11-20T07:50:28.585Z]     31360.00 IOPS,   122.50 MiB/s
[2024-11-20T07:50:28.585Z]     31500.80 IOPS,   123.05 MiB/s
00:06:12.545                                                                                                  Latency(us)
00:06:12.545  
[2024-11-20T07:50:28.585Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:12.545  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:12.545  	 Verification LBA range: start 0x0 length 0xa0000
00:06:12.545  	 Nvme0n1             :       5.02    7752.62      30.28       0.00     0.00   16468.90    2606.55   19779.96
00:06:12.545  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:12.545  	 Verification LBA range: start 0xa0000 length 0xa0000
00:06:12.545  	 Nvme0n1             :       5.02    8007.71      31.28       0.00     0.00   15926.49    2576.76   19541.64
00:06:12.545  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:12.545  	 Verification LBA range: start 0x0 length 0xa0000
00:06:12.545  	 Nvme1n1             :       5.02    7751.79      30.28       0.00     0.00   16453.48    2204.39   20614.05
00:06:12.545  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:12.545  	 Verification LBA range: start 0xa0000 length 0xa0000
00:06:12.545  	 Nvme1n1             :       5.01    8002.71      31.26       0.00     0.00   15953.12    2308.65   20614.05
00:06:12.545  
[2024-11-20T07:50:28.585Z]  ===================================================================================================================
00:06:12.545  
[2024-11-20T07:50:28.585Z]  Total                       :              31514.83     123.10       0.00     0.00   16196.48    2204.39   20614.05
00:06:13.480  
00:06:13.480  real	0m6.259s
00:06:13.480  user	0m11.809s
00:06:13.480  sys	0m0.163s
00:06:13.480   07:50:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:13.480   07:50:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:13.480  ************************************
00:06:13.480  END TEST bdev_verify
00:06:13.480  ************************************
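Stripped of the harness, each verify pass in this log is a single bdevperf invocation against the bdev.json the harness wrote out earlier; a standalone sketch of the traced command:

    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
    # -m reactor core mask; remaining flags kept exactly as traced above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3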
00:06:13.480   07:50:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:13.480   07:50:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:13.480   07:50:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:13.480   07:50:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:13.480  ************************************
00:06:13.480  START TEST bdev_verify_big_io
00:06:13.480  ************************************
00:06:13.480   07:50:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:13.480  [2024-11-20 07:50:29.277408] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:13.480  [2024-11-20 07:50:29.277570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59383 ]
00:06:13.480  [2024-11-20 07:50:29.408491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:13.480  [2024-11-20 07:50:29.482378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:13.480  [2024-11-20 07:50:29.482390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.046  Running I/O for 5 seconds...
00:06:16.352       2498.00 IOPS,   156.12 MiB/s
[2024-11-20T07:50:32.989Z]      2609.00 IOPS,   163.06 MiB/s
[2024-11-20T07:50:34.365Z]      2795.33 IOPS,   174.71 MiB/s
[2024-11-20T07:50:34.934Z]      2824.50 IOPS,   176.53 MiB/s
00:06:18.894                                                                                                  Latency(us)
00:06:18.894  
[2024-11-20T07:50:34.934Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:18.894  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:18.894  	 Verification LBA range: start 0x0 length 0xa000
00:06:18.894  	 Nvme0n1             :       5.07     682.19      42.64       0.00     0.00  184921.28   11796.48  172538.41
00:06:18.894  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:18.894  	 Verification LBA range: start 0xa000 length 0xa000
00:06:18.894  	 Nvme0n1             :       5.12     749.94      46.87       0.00     0.00  168528.42    4259.84  197322.94
00:06:18.894  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:18.894  	 Verification LBA range: start 0x0 length 0xa000
00:06:18.894  	 Nvme1n1             :       5.12     699.35      43.71       0.00     0.00  177419.81     463.59  191603.43
00:06:18.894  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:18.894  	 Verification LBA range: start 0xa000 length 0xa000
00:06:18.894  	 Nvme1n1             :       5.12     742.94      46.43       0.00     0.00  167486.05    1660.74  227826.97
00:06:18.894  
[2024-11-20T07:50:34.934Z]  ===================================================================================================================
00:06:18.894  
[2024-11-20T07:50:34.934Z]  Total                       :               2874.42     179.65       0.00     0.00  174285.82     463.59  227826.97
00:06:19.828  
00:06:19.828  real	0m6.449s
00:06:19.828  user	0m12.221s
00:06:19.828  sys	0m0.194s
00:06:19.828   07:50:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:19.828  ************************************
00:06:19.828  END TEST bdev_verify_big_io
00:06:19.828  ************************************
00:06:19.828   07:50:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:19.828   07:50:35 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:19.828   07:50:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:19.828   07:50:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.828   07:50:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:19.828  ************************************
00:06:19.828  START TEST bdev_write_zeroes
00:06:19.828  ************************************
00:06:19.828   07:50:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:19.828  [2024-11-20 07:50:35.777448] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:19.828  [2024-11-20 07:50:35.777593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59474 ]
00:06:20.086  [2024-11-20 07:50:35.912425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.086  [2024-11-20 07:50:35.987505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.344  Running I/O for 1 seconds...
00:06:21.274      75007.00 IOPS,   293.00 MiB/s
00:06:21.274                                                                                                  Latency(us)
00:06:21.274  
[2024-11-20T07:50:37.314Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:21.274  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.274  	 Nvme0n1             :       1.01   37448.33     146.28       0.00     0.00    3414.83    1735.21   10128.29
00:06:21.274  Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.274  	 Nvme1n1             :       1.01   37395.03     146.07       0.00     0.00    3418.85    2159.71   11200.70
00:06:21.274  
[2024-11-20T07:50:37.314Z]  ===================================================================================================================
00:06:21.274  
[2024-11-20T07:50:37.314Z]  Total                       :              74843.36     292.36       0.00     0.00    3416.84    1735.21   11200.70
00:06:21.532  
00:06:21.532  real	0m1.762s
00:06:21.532  user	0m1.492s
00:06:21.532  sys	0m0.153s
00:06:21.532   07:50:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.532   07:50:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:21.532  ************************************
00:06:21.532  END TEST bdev_write_zeroes
00:06:21.532  ************************************
00:06:21.533   07:50:37 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:21.533   07:50:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:21.533   07:50:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.533   07:50:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:21.533  ************************************
00:06:21.533  START TEST bdev_json_nonenclosed
00:06:21.533  ************************************
00:06:21.533   07:50:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:21.789  [2024-11-20 07:50:37.595748] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:21.789  [2024-11-20 07:50:37.595865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59510 ]
00:06:21.789  [2024-11-20 07:50:37.730007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.789  [2024-11-20 07:50:37.800810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.789  [2024-11-20 07:50:37.800886] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:21.789  [2024-11-20 07:50:37.800904] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:21.789  [2024-11-20 07:50:37.800914] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:22.047  
00:06:22.047  real	0m0.313s
00:06:22.047  user	0m0.155s
00:06:22.047  sys	0m0.055s
00:06:22.047   07:50:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:22.047   07:50:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:22.047  ************************************
00:06:22.047  END TEST bdev_json_nonenclosed
00:06:22.047  ************************************
00:06:22.047   07:50:37 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:22.047   07:50:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:22.047   07:50:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:22.047   07:50:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:22.047  ************************************
00:06:22.047  START TEST bdev_json_nonarray
00:06:22.047  ************************************
00:06:22.047   07:50:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:22.047  [2024-11-20 07:50:37.965045] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:22.047  [2024-11-20 07:50:37.965189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ]
00:06:22.304  [2024-11-20 07:50:38.097964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.304  [2024-11-20 07:50:38.168775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.304  [2024-11-20 07:50:38.168867] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:06:22.304  [2024-11-20 07:50:38.168884] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:22.304  [2024-11-20 07:50:38.168894] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:22.304  
00:06:22.304  real	0m0.329s
00:06:22.304  user	0m0.162s
00:06:22.304  sys	0m0.063s
00:06:22.304   07:50:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:22.304   07:50:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:22.304  ************************************
00:06:22.304  END TEST bdev_json_nonarray
00:06:22.304  ************************************
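The two JSON negative tests above hand bdevperf a config that is not enclosed in {} and one whose "subsystems" value is not an array; a well-formed config that passes both checks has roughly this shape (illustrative values, mirroring the attach-controller parameters used elsewhere in this run):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }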
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:06:22.304   07:50:38 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:06:22.304  
00:06:22.304  real	0m25.320s
00:06:22.304  user	0m40.365s
00:06:22.304  sys	0m3.905s
00:06:22.304   07:50:38 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:22.305  ************************************
00:06:22.305  END TEST blockdev_nvme
00:06:22.305   07:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:22.305  ************************************
00:06:22.563    07:50:38  -- spdk/autotest.sh@209 -- # uname -s
00:06:22.563   07:50:38  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:06:22.563   07:50:38  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:22.563   07:50:38  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:22.563   07:50:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:22.563   07:50:38  -- common/autotest_common.sh@10 -- # set +x
00:06:22.563  ************************************
00:06:22.563  START TEST blockdev_nvme_gpt
00:06:22.563  ************************************
00:06:22.563   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:22.563  * Looking for test storage...
00:06:22.563  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:22.563     07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version
00:06:22.563     07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:22.563     07:50:38 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:22.563    07:50:38 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:22.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.563  		--rc genhtml_branch_coverage=1
00:06:22.563  		--rc genhtml_function_coverage=1
00:06:22.563  		--rc genhtml_legend=1
00:06:22.563  		--rc geninfo_all_blocks=1
00:06:22.563  		--rc geninfo_unexecuted_blocks=1
00:06:22.563  		
00:06:22.563  		'
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:22.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.563  		--rc genhtml_branch_coverage=1
00:06:22.563  		--rc genhtml_function_coverage=1
00:06:22.563  		--rc genhtml_legend=1
00:06:22.563  		--rc geninfo_all_blocks=1
00:06:22.563  		--rc geninfo_unexecuted_blocks=1
00:06:22.563  		
00:06:22.563  		'
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:22.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.563  		--rc genhtml_branch_coverage=1
00:06:22.563  		--rc genhtml_function_coverage=1
00:06:22.563  		--rc genhtml_legend=1
00:06:22.563  		--rc geninfo_all_blocks=1
00:06:22.563  		--rc geninfo_unexecuted_blocks=1
00:06:22.563  		
00:06:22.563  		'
00:06:22.563    07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:22.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.563  		--rc genhtml_branch_coverage=1
00:06:22.563  		--rc genhtml_function_coverage=1
00:06:22.563  		--rc genhtml_legend=1
00:06:22.563  		--rc geninfo_all_blocks=1
00:06:22.563  		--rc geninfo_unexecuted_blocks=1
00:06:22.563  		
00:06:22.563  		'
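The lcov version gate traced above goes through scripts/common.sh cmp_versions; a simplified dotted-version comparison in the same spirit (splitting on dots only, whereas the real helper also splits on '-' and ':'):

    # Return success if version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "1.15 < 2"    # matches the 'lt 1.15 2' check above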
00:06:22.563   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:22.563    07:50:38 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:06:22.563   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:06:22.564    07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device=
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek=
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx=
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]]
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]]
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59612
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:06:22.564   07:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 59612
00:06:22.564   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 59612 ']'
00:06:22.564   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.564   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:22.564   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.564  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.564   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:22.564   07:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:22.822  [2024-11-20 07:50:38.623655] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:22.822  [2024-11-20 07:50:38.623779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59612 ]
00:06:22.822  [2024-11-20 07:50:38.758971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.822  [2024-11-20 07:50:38.829919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.392   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.392   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
00:06:23.392   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:06:23.392   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf
00:06:23.392   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:23.651  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:23.651  Waiting for block devices as requested
00:06:23.651  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:23.651  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:23.912   07:50:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1')
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:06:23.912   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:06:23.913    07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:06:23.913  BYT;
00:06:23.913  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:06:23.913  BYT;
00:06:23.913  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:06:23.913    07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:06:23.913     07:50:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:06:23.913    07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:06:23.913     07:50:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:06:23.913    07:50:39 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:06:23.913   07:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:06:24.851  The operation has completed successfully.
00:06:24.851   07:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:06:26.226  The operation has completed successfully.
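The GPT preparation traced above reduces to roughly this sketch; the type GUIDs are the SPDK_GPT_PART_TYPE_GUID / SPDK_GPT_PART_TYPE_GUID_OLD values grepped out of module/bdev/gpt/gpt.h, and the unique GUIDs are the fixed test values blockdev.sh reuses later.

    dev=/dev/nvme0n1
    # Pick a disk that carries no recognised label, then lay down a fresh GPT
    # split into two halves; the SPDK gpt bdev module claims partitions by the
    # type GUIDs applied below, not by the partition names.
    parted "$dev" -ms print 2>&1 | grep -q 'unrecognised disk label'
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"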
00:06:26.226   07:50:41 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:26.485  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:26.745  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:26.745  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:26.745   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:06:26.745   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.745   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:26.745  []
00:06:26.745   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.745   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:06:26.745   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:06:26.745   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:06:26.745    07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:26.745   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } } ] }'\'''
00:06:26.745   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.746   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.004   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.004   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:06:27.004   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.004   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.004   07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.004   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat
00:06:27.004    07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.004    07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.004    07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.004   07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:06:27.004    07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:06:27.004    07:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.004    07:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.004    07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.004   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:06:27.262    07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "0f819799-531c-4982-833d-1bee261413d5"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "0f819799-531c-4982-833d-1bee261413d5",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme1n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' 
'    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
00:06:27.262    07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name
00:06:27.262   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:06:27.262   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
00:06:27.262   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:06:27.262   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 59612
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 59612 ']'
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 59612
00:06:27.262    07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:27.262    07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59612
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:27.262  killing process with pid 59612
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59612'
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 59612
00:06:27.262   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 59612
00:06:27.520   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:27.520   07:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:06:27.520   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:06:27.520   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.520   07:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:27.520  ************************************
00:06:27.520  START TEST bdev_hello_world
00:06:27.520  ************************************
00:06:27.520   07:50:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:06:27.778  [2024-11-20 07:50:43.558551] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:27.778  [2024-11-20 07:50:43.558661] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ]
00:06:27.778  [2024-11-20 07:50:43.684698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.778  [2024-11-20 07:50:43.751507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.036  [2024-11-20 07:50:44.034340] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:06:28.036  [2024-11-20 07:50:44.034397] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:06:28.036  [2024-11-20 07:50:44.034417] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:06:28.036  [2024-11-20 07:50:44.036308] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:06:28.036  [2024-11-20 07:50:44.036662] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:06:28.036  [2024-11-20 07:50:44.036690] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:06:28.036  [2024-11-20 07:50:44.036853] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:06:28.036  
00:06:28.036  [2024-11-20 07:50:44.036880] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:06:28.295  
00:06:28.295  real	0m0.711s
00:06:28.295  user	0m0.448s
00:06:28.295  sys	0m0.161s
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:06:28.295  ************************************
00:06:28.295  END TEST bdev_hello_world
00:06:28.295  ************************************
00:06:28.295   07:50:44 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:06:28.295   07:50:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:28.295   07:50:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.295   07:50:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:28.295  ************************************
00:06:28.295  START TEST bdev_bounds
00:06:28.295  ************************************
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60003
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:06:28.295  Process bdevio pid: 60003
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60003'
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60003
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60003 ']'
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.295  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:28.295   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:28.295  [2024-11-20 07:50:44.330427] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:28.295  [2024-11-20 07:50:44.330555] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60003 ]
00:06:28.554  [2024-11-20 07:50:44.464268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:28.554  [2024-11-20 07:50:44.532051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.554  [2024-11-20 07:50:44.532130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:28.554  [2024-11-20 07:50:44.532140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.812   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:28.812   07:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:06:28.812   07:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:29.070  I/O targets:
00:06:29.070    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:06:29.070    Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:29.070    Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:29.070  
00:06:29.070  
00:06:29.070       CUnit - A unit testing framework for C - Version 2.1-3
00:06:29.070       http://cunit.sourceforge.net/
00:06:29.070  
00:06:29.070  
00:06:29.070  Suite: bdevio tests on: Nvme1n1p2
00:06:29.070    Test: blockdev write read block ...passed
00:06:29.070    Test: blockdev write zeroes read block ...passed
00:06:29.070    Test: blockdev write zeroes read no split ...passed
00:06:29.070    Test: blockdev write zeroes read split ...passed
00:06:29.070    Test: blockdev write zeroes read split partial ...passed
00:06:29.070    Test: blockdev reset ...[2024-11-20 07:50:45.002410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:06:29.070  [2024-11-20 07:50:45.004264] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:29.070  passed
00:06:29.070    Test: blockdev write read 8 blocks ...passed
00:06:29.070    Test: blockdev write read size > 128k ...passed
00:06:29.070    Test: blockdev write read invalid size ...passed
00:06:29.070    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:29.070    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:29.070    Test: blockdev write read max offset ...passed
00:06:29.070    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:29.070    Test: blockdev writev readv 8 blocks ...passed
00:06:29.070    Test: blockdev writev readv 30 x 1block ...passed
00:06:29.070    Test: blockdev writev readv block ...passed
00:06:29.070    Test: blockdev writev readv size > 128k ...passed
00:06:29.070    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:29.070    Test: blockdev comparev and writev ...[2024-11-20 07:50:45.009295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2eda10000 len:0x1000
00:06:29.070  [2024-11-20 07:50:45.009342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:29.070  passed
00:06:29.070    Test: blockdev nvme passthru rw ...passed
00:06:29.070    Test: blockdev nvme passthru vendor specific ...passed
00:06:29.070    Test: blockdev nvme admin passthru ...passed
00:06:29.070    Test: blockdev copy ...passed
00:06:29.070  Suite: bdevio tests on: Nvme1n1p1
00:06:29.070    Test: blockdev write read block ...passed
00:06:29.070    Test: blockdev write zeroes read block ...passed
00:06:29.070    Test: blockdev write zeroes read no split ...passed
00:06:29.070    Test: blockdev write zeroes read split ...passed
00:06:29.070    Test: blockdev write zeroes read split partial ...passed
00:06:29.070    Test: blockdev reset ...[2024-11-20 07:50:45.022009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:06:29.070  passed
00:06:29.070    Test: blockdev write read 8 blocks ...[2024-11-20 07:50:45.023516] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:29.070  passed
00:06:29.070    Test: blockdev write read size > 128k ...passed
00:06:29.070    Test: blockdev write read invalid size ...passed
00:06:29.070    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:29.070    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:29.070    Test: blockdev write read max offset ...passed
00:06:29.070    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:29.070    Test: blockdev writev readv 8 blocks ...passed
00:06:29.070    Test: blockdev writev readv 30 x 1block ...passed
00:06:29.070    Test: blockdev writev readv block ...passed
00:06:29.070    Test: blockdev writev readv size > 128k ...passed
00:06:29.070    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:29.070    Test: blockdev comparev and writev ...[2024-11-20 07:50:45.027688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2eda05000 len:0x1000
00:06:29.070  [2024-11-20 07:50:45.027728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:29.070  passed
00:06:29.070    Test: blockdev nvme passthru rw ...passed
00:06:29.070    Test: blockdev nvme passthru vendor specific ...passed
00:06:29.070    Test: blockdev nvme admin passthru ...passed
00:06:29.070    Test: blockdev copy ...passed
00:06:29.070  Suite: bdevio tests on: Nvme0n1
00:06:29.070    Test: blockdev write read block ...passed
00:06:29.071    Test: blockdev write zeroes read block ...passed
00:06:29.071    Test: blockdev write zeroes read no split ...passed
00:06:29.071    Test: blockdev write zeroes read split ...passed
00:06:29.071    Test: blockdev write zeroes read split partial ...passed
00:06:29.071    Test: blockdev reset ...[2024-11-20 07:50:45.039259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:06:29.071  [2024-11-20 07:50:45.040864] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:29.071  passed
00:06:29.071    Test: blockdev write read 8 blocks ...passed
00:06:29.071    Test: blockdev write read size > 128k ...passed
00:06:29.071    Test: blockdev write read invalid size ...passed
00:06:29.071    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:29.071    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:29.071    Test: blockdev write read max offset ...passed
00:06:29.071    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:29.071    Test: blockdev writev readv 8 blocks ...passed
00:06:29.071    Test: blockdev writev readv 30 x 1block ...passed
00:06:29.071    Test: blockdev writev readv block ...passed
00:06:29.071    Test: blockdev writev readv size > 128k ...passed
00:06:29.071    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:29.071    Test: blockdev comparev and writev ...[2024-11-20 07:50:45.045177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eda01000 len:0x1000
00:06:29.071  [2024-11-20 07:50:45.045217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:29.071  passed
00:06:29.071    Test: blockdev nvme passthru rw ...passed
00:06:29.071    Test: blockdev nvme passthru vendor specific ...passed
00:06:29.071    Test: blockdev nvme admin passthru ...[2024-11-20 07:50:45.045740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:29.071  [2024-11-20 07:50:45.045765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:29.071  passed
00:06:29.071    Test: blockdev copy ...passed
00:06:29.071  
00:06:29.071  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:29.071                suites      3      3    n/a      0        0
00:06:29.071                 tests     69     69     69      0        0
00:06:29.071               asserts    436    436    436      0      n/a
00:06:29.071  
00:06:29.071  Elapsed time =    0.144 seconds
00:06:29.071  0
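Editor's note (not part of the log): the bounds test starts the bdevio server in wait mode and then drives the CUnit suites over RPC with tests.py, as the commands above show. A rough sketch of that two-step flow; the backgrounding and plain kill here are a simplification of the harness's waitforlisten/trap cleanup:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevio in wait mode (-w) against the suite's bdev.json, then kick off
    # the CUnit suites over the RPC socket.
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    # ... wait for /var/tmp/spdk.sock to accept RPCs (waitforlisten in the harness) ...
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"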
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60003
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60003 ']'
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60003
00:06:29.071    07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.071    07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60003
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:29.071  killing process with pid 60003
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60003'
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60003
00:06:29.071   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60003
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:06:29.330  
00:06:29.330  real	0m0.967s
00:06:29.330  user	0m2.383s
00:06:29.330  sys	0m0.257s
00:06:29.330  ************************************
00:06:29.330  END TEST bdev_bounds
00:06:29.330  ************************************
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:29.330   07:50:45 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2' ''
00:06:29.330   07:50:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:29.330   07:50:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:29.330   07:50:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:29.330  ************************************
00:06:29.330  START TEST bdev_nbd
00:06:29.330  ************************************
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2' ''
00:06:29.330    07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2')
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=3
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=3
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10')
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2')
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60043
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60043 /var/tmp/spdk-nbd.sock
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60043 ']'
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:29.330  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:29.330   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:29.330  [2024-11-20 07:50:45.357918] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:29.330  [2024-11-20 07:50:45.358033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:29.588  [2024-11-20 07:50:45.493276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.589  [2024-11-20 07:50:45.564627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
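Editor's note (not part of the log): every nbd_* RPC that follows talks to a bdev_svc instance listening on a dedicated socket, started with the command visible above. A simplified sketch of that setup; readiness polling is abbreviated compared to the harness:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Command line as shown in the log; backgrounding and readiness polling are
    # simplified here relative to the harness's waitforlisten helper.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    nbd_pid=$!
    # ... once the socket answers, each nbd_* RPC below is sent with:
    #     scripts/rpc.py -s /var/tmp/spdk-nbd.sock <method> ...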
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2'
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2')
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2'
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2')
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:06:29.847   07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 3 ))
00:06:29.847    07:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:06:30.414   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:06:30.414    07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:06:30.414   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:06:30.414   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:30.414   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.414   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.414   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.415  1+0 records in
00:06:30.415  1+0 records out
00:06:30.415  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478132 s, 8.6 MB/s
00:06:30.415    07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:30.415   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 3 ))
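Editor's note (not part of the log): the waitfornbd helper traced above follows a simple pattern: poll /proc/partitions until the nbd device appears, then prove it serves I/O with a single 4 KiB direct read. A hedged re-creation of that pattern; the retry count and dd/stat steps mirror the log, while the sleep between retries and the temp-file path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # A single 4 KiB O_DIRECT read proves the device actually serves I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }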
00:06:30.415    07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:06:30.673    07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.673  1+0 records in
00:06:30.673  1+0 records out
00:06:30.673  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465528 s, 8.8 MB/s
00:06:30.673    07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:30.673   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 3 ))
00:06:30.673    07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:06:30.931    07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.931  1+0 records in
00:06:30.931  1+0 records out
00:06:30.931  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553717 s, 7.4 MB/s
00:06:30.931    07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:30.931   07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 3 ))
00:06:30.931    07:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:31.188   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:06:31.188    {
00:06:31.188      "nbd_device": "/dev/nbd0",
00:06:31.188      "bdev_name": "Nvme0n1"
00:06:31.188    },
00:06:31.188    {
00:06:31.188      "nbd_device": "/dev/nbd1",
00:06:31.188      "bdev_name": "Nvme1n1p1"
00:06:31.188    },
00:06:31.188    {
00:06:31.188      "nbd_device": "/dev/nbd2",
00:06:31.188      "bdev_name": "Nvme1n1p2"
00:06:31.188    }
00:06:31.188  ]'
00:06:31.188   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:06:31.188    07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:06:31.188    {
00:06:31.188      "nbd_device": "/dev/nbd0",
00:06:31.188      "bdev_name": "Nvme0n1"
00:06:31.188    },
00:06:31.188    {
00:06:31.188      "nbd_device": "/dev/nbd1",
00:06:31.188      "bdev_name": "Nvme1n1p1"
00:06:31.188    },
00:06:31.188    {
00:06:31.188      "nbd_device": "/dev/nbd2",
00:06:31.188      "bdev_name": "Nvme1n1p2"
00:06:31.188    }
00:06:31.188  ]'
00:06:31.188    07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
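Editor's note (not part of the log): the verification step above asks the nbd_get_disks RPC for the current exports, filters the JSON through jq to recover the /dev/nbdX names, and then stops each one. Condensed into a loop, using the rpc.py path and method names shown in the log:

    # Sketch: list the exported NBD devices and stop each one, as the
    # nbd_stop_disks step above does.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for dev in $($RPC nbd_get_disks | jq -r '.[] | .nbd_device'); do
        $RPC nbd_stop_disk "$dev"
    done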
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2'
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2')
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:31.445   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:31.702    07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:31.702   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:31.959    07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:31.959   07:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:06:32.216    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:32.216   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.216    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:32.216    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.216     07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:32.473    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:32.473     07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:32.473     07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:32.473    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:32.473     07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:32.473     07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:32.473     07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:32.473    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:32.473    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2' '/dev/nbd0 /dev/nbd1 /dev/nbd10'
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2')
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10')
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2' '/dev/nbd0 /dev/nbd1 /dev/nbd10'
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2')
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10')
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 3 ))
00:06:32.473   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:06:33.037  /dev/nbd0
00:06:33.037    07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:33.037  1+0 records in
00:06:33.037  1+0 records out
00:06:33.037  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768479 s, 5.3 MB/s
00:06:33.037    07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 3 ))
00:06:33.037   07:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:06:33.294  /dev/nbd1
00:06:33.294    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:33.294  1+0 records in
00:06:33.294  1+0 records out
00:06:33.294  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600701 s, 6.8 MB/s
00:06:33.294    07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 3 ))
00:06:33.294   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:06:33.552  /dev/nbd10
00:06:33.553    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:33.553  1+0 records in
00:06:33.553  1+0 records out
00:06:33.553  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572087 s, 7.2 MB/s
00:06:33.553    07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:33.553   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 3 ))
00:06:33.553    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:33.553    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:33.553     07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:33.811    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:33.811    {
00:06:33.811      "nbd_device": "/dev/nbd0",
00:06:33.811      "bdev_name": "Nvme0n1"
00:06:33.811    },
00:06:33.811    {
00:06:33.811      "nbd_device": "/dev/nbd1",
00:06:33.811      "bdev_name": "Nvme1n1p1"
00:06:33.811    },
00:06:33.811    {
00:06:33.811      "nbd_device": "/dev/nbd10",
00:06:33.811      "bdev_name": "Nvme1n1p2"
00:06:33.811    }
00:06:33.811  ]'
00:06:33.811     07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:06:33.811    {
00:06:33.811      "nbd_device": "/dev/nbd0",
00:06:33.811      "bdev_name": "Nvme0n1"
00:06:33.811    },
00:06:33.811    {
00:06:33.811      "nbd_device": "/dev/nbd1",
00:06:33.811      "bdev_name": "Nvme1n1p1"
00:06:33.811    },
00:06:33.811    {
00:06:33.811      "nbd_device": "/dev/nbd10",
00:06:33.811      "bdev_name": "Nvme1n1p2"
00:06:33.811    }
00:06:33.811  ]'
00:06:33.811     07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:33.811    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:33.811  /dev/nbd1
00:06:33.812  /dev/nbd10'
00:06:33.812     07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:33.812  /dev/nbd1
00:06:33.812  /dev/nbd10'
00:06:33.812     07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:33.812    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=3
00:06:33.812    07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 3
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=3
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 3 -ne 3 ']'
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10' write
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10')
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:06:33.812  256+0 records in
00:06:33.812  256+0 records out
00:06:33.812  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007567 s, 139 MB/s
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:33.812   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:34.070  256+0 records in
00:06:34.070  256+0 records out
00:06:34.070  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0853534 s, 12.3 MB/s
00:06:34.070   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:34.070   07:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:34.070  256+0 records in
00:06:34.070  256+0 records out
00:06:34.070  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0915701 s, 11.5 MB/s
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:06:34.070  256+0 records in
00:06:34.070  256+0 records out
00:06:34.070  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0886202 s, 11.8 MB/s
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10' verify
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10')
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:34.070   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
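Editor's note (not part of the log): the data-verify pass above writes the same 1 MiB of random data to every NBD export and then compares each device against the source file byte for byte. A condensed sketch of that write-then-compare loop, with the file name, sizes, and flags taken from the log:

    RAND=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10)
    dd if=/dev/urandom of="$RAND" bs=4096 count=256        # 1 MiB of random test data
    for dev in "${nbd_list[@]}"; do
        dd if="$RAND" of="$dev" bs=4096 count=256 oflag=direct   # write it to each export
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$RAND" "$dev"                        # byte-compare; fails on mismatch
    done
    rm "$RAND"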
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10'
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10')
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:34.329   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:34.587    07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:34.587   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:34.845    07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:34.845   07:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:06:35.137    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:35.137   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:35.138    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:35.138    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.138     07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:35.432    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:35.432     07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:35.432     07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:35.432    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:35.432     07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:35.432     07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:35.432     07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:35.432    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:35.432    07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:06:35.432   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:06:35.691  malloc_lvol_verify
00:06:35.949   07:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:06:36.208  865ce65e-b147-4738-a2b2-034cf7cb1628
00:06:36.208   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:06:36.466  176d2b1d-27fb-4a05-820e-06f4a321a60d
00:06:36.466   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:06:36.724  /dev/nbd0
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:06:36.724  mke2fs 1.47.0 (5-Feb-2023)
00:06:36.724  Discarding device blocks:    0/4096         done                            
00:06:36.724  Creating filesystem with 4096 1k blocks and 1024 inodes
00:06:36.724  
00:06:36.724  Allocating group tables: 0/1   done                            
00:06:36.724  Writing inode tables: 0/1   done                            
00:06:36.724  Creating journal (1024 blocks): done
00:06:36.724  Writing superblocks and filesystem accounting information: 0/1   done
00:06:36.724  
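Editor's note (not part of the log): the lvol verification above builds a small stack entirely over RPC: a 16 MiB malloc bdev with 512-byte blocks, a lvstore on top of it, a 4 MiB lvol, an NBD export of that lvol, and finally mkfs.ext4 to show the exported block device is usable end to end. The same sequence, compressed, with sizes and names as they appear in the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd0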
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:36.724   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:36.983    07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60043
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60043 ']'
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60043
00:06:36.983    07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:36.983    07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60043
00:06:36.983  killing process with pid 60043
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60043'
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60043
00:06:36.983   07:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60043
00:06:37.241   07:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:06:37.241  
00:06:37.241  real	0m7.902s
00:06:37.241  user	0m11.875s
00:06:37.241  sys	0m2.777s
00:06:37.241   07:50:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.241   07:50:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:37.241  ************************************
00:06:37.241  END TEST bdev_nbd
00:06:37.241  ************************************
00:06:37.241   07:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:06:37.241   07:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']'
00:06:37.241   07:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']'
00:06:37.241  skipping fio tests on NVMe due to multi-ns failures.
00:06:37.242   07:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:06:37.242   07:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:37.242   07:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:37.242   07:50:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:37.242   07:50:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.242   07:50:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:37.242  ************************************
00:06:37.242  START TEST bdev_verify
00:06:37.242  ************************************
00:06:37.242   07:50:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:37.500  [2024-11-20 07:50:53.301210] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:37.500  [2024-11-20 07:50:53.301352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ]
00:06:37.500  [2024-11-20 07:50:53.434719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:37.500  [2024-11-20 07:50:53.502361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:37.500  [2024-11-20 07:50:53.502366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.759  Running I/O for 5 seconds...
00:06:40.070      32000.00 IOPS,   125.00 MiB/s
[2024-11-20T07:50:57.047Z]     31680.00 IOPS,   123.75 MiB/s
[2024-11-20T07:50:57.979Z]     31317.33 IOPS,   122.33 MiB/s
[2024-11-20T07:50:58.922Z]     31168.00 IOPS,   121.75 MiB/s
[2024-11-20T07:50:58.922Z]     31372.80 IOPS,   122.55 MiB/s
00:06:42.882                                                                                                  Latency(us)
00:06:42.882  
[2024-11-20T07:50:58.922Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:42.882  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.882  	 Verification LBA range: start 0x0 length 0xa0000
00:06:42.882  	 Nvme0n1             :       5.02    5151.57      20.12       0.00     0.00   24792.46    5004.57   26333.56
00:06:42.882  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.882  	 Verification LBA range: start 0xa0000 length 0xa0000
00:06:42.882  	 Nvme0n1             :       5.02    5273.59      20.60       0.00     0.00   24232.58    4676.89   24546.21
00:06:42.882  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.882  	 Verification LBA range: start 0x0 length 0x4ff80
00:06:42.882  	 Nvme1n1p1           :       5.02    5151.03      20.12       0.00     0.00   24765.64    5242.88   25618.62
00:06:42.882  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.882  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:42.882  	 Nvme1n1p1           :       5.03    5271.98      20.59       0.00     0.00   24221.43    4974.78   24784.52
00:06:42.882  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.882  	 Verification LBA range: start 0x0 length 0x4ff7f
00:06:42.882  	 Nvme1n1p2           :       5.02    5158.88      20.15       0.00     0.00   24701.52    2755.49   27167.65
00:06:42.882  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.882  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:42.882  	 Nvme1n1p2           :       5.03    5271.48      20.59       0.00     0.00   24180.68    4736.47   23354.65
00:06:42.882  
[2024-11-20T07:50:58.922Z]  ===================================================================================================================
00:06:42.882  
[2024-11-20T07:50:58.922Z]  Total                       :              31278.53     122.18       0.00     0.00   24479.16    2755.49   27167.65
00:06:43.465  
00:06:43.465  real	0m6.145s
00:06:43.465  user	0m11.609s
00:06:43.465  sys	0m0.172s
00:06:43.465   07:50:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:43.465   07:50:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:43.465  ************************************
00:06:43.465  END TEST bdev_verify
00:06:43.465  ************************************
00:06:43.465   07:50:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:43.465   07:50:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:43.465   07:50:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:43.465   07:50:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:43.465  ************************************
00:06:43.465  START TEST bdev_verify_big_io
00:06:43.465  ************************************
00:06:43.465   07:50:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:43.723  [2024-11-20 07:50:59.512721] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:43.723  [2024-11-20 07:50:59.512850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ]
00:06:43.723  [2024-11-20 07:50:59.647675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:43.723  [2024-11-20 07:50:59.716556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:43.723  [2024-11-20 07:50:59.716565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.982  Running I/O for 5 seconds...
00:06:46.852       2719.00 IOPS,   169.94 MiB/s
[2024-11-20T07:51:03.458Z]      3089.50 IOPS,   193.09 MiB/s
[2024-11-20T07:51:04.391Z]      2955.67 IOPS,   184.73 MiB/s
[2024-11-20T07:51:05.326Z]      2934.75 IOPS,   183.42 MiB/s
[2024-11-20T07:51:05.326Z]      3001.80 IOPS,   187.61 MiB/s
00:06:49.286                                                                                                  Latency(us)
00:06:49.286  
[2024-11-20T07:51:05.326Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:49.286  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.286  	 Verification LBA range: start 0x0 length 0xa000
00:06:49.286  	 Nvme0n1             :       5.19     518.21      32.39       0.00     0.00  244577.94   16920.20  274536.26
00:06:49.286  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.286  	 Verification LBA range: start 0xa000 length 0xa000
00:06:49.286  	 Nvme0n1             :       5.16     471.42      29.46       0.00     0.00  267639.85    6583.39  326011.81
00:06:49.286  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.286  	 Verification LBA range: start 0x0 length 0x4ff8
00:06:49.286  	 Nvme1n1p1           :       5.19     518.07      32.38       0.00     0.00  241354.54   30742.34  272629.76
00:06:49.286  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.286  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:06:49.286  	 Nvme1n1p1           :       5.16     471.29      29.46       0.00     0.00  263351.34   18111.77  280255.77
00:06:49.286  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:49.286  	 Verification LBA range: start 0x0 length 0x4ff7
00:06:49.286  	 Nvme1n1p2           :       5.19     523.69      32.73       0.00     0.00  235329.54    1817.13  301227.29
00:06:49.286  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:49.286  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:06:49.286  	 Nvme1n1p2           :       5.20     492.55      30.78       0.00     0.00  249010.97     901.12  329824.81
00:06:49.287  
[2024-11-20T07:51:05.327Z]  ===================================================================================================================
00:06:49.287  
[2024-11-20T07:51:05.327Z]  Total                       :               2995.24     187.20       0.00     0.00  249686.59     901.12  329824.81
00:06:50.223  
00:06:50.223  real	0m6.568s
00:06:50.223  user	0m12.440s
00:06:50.223  sys	0m0.183s
00:06:50.223   07:51:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.223   07:51:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:50.223  ************************************
00:06:50.223  END TEST bdev_verify_big_io
00:06:50.223  ************************************
00:06:50.223   07:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:50.223   07:51:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:50.223   07:51:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.223   07:51:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:50.223  ************************************
00:06:50.223  START TEST bdev_write_zeroes
00:06:50.223  ************************************
00:06:50.223   07:51:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:50.223  [2024-11-20 07:51:06.135847] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:50.223  [2024-11-20 07:51:06.135983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60470 ]
00:06:50.514  [2024-11-20 07:51:06.269373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.514  [2024-11-20 07:51:06.344567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.781  Running I/O for 1 seconds...
00:06:51.716      72639.00 IOPS,   283.75 MiB/s
00:06:51.716                                                                                                  Latency(us)
00:06:51.716  
[2024-11-20T07:51:07.756Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:51.716  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:51.716  	 Nvme0n1             :       1.01   24235.65      94.67       0.00     0.00    5276.55    2412.92   12928.47
00:06:51.716  Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:51.716  	 Nvme1n1p1           :       1.01   24292.55      94.89       0.00     0.00    5262.95    1980.97   11379.43
00:06:51.716  Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:51.716  	 Nvme1n1p2           :       1.01   24221.20      94.61       0.00     0.00    5275.51    1765.00   14834.97
00:06:51.716  
[2024-11-20T07:51:07.756Z]  ===================================================================================================================
00:06:51.716  
[2024-11-20T07:51:07.756Z]  Total                       :              72749.41     284.18       0.00     0.00    5271.66    1765.00   14834.97
00:06:51.974  
00:06:51.974  real	0m1.762s
00:06:51.974  user	0m1.459s
00:06:51.974  sys	0m0.185s
00:06:51.974   07:51:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.974   07:51:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:51.974  ************************************
00:06:51.974  END TEST bdev_write_zeroes
00:06:51.974  ************************************
00:06:51.974   07:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:51.974   07:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:51.974   07:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.974   07:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:51.974  ************************************
00:06:51.974  START TEST bdev_json_nonenclosed
00:06:51.974  ************************************
00:06:51.974   07:51:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:51.974  [2024-11-20 07:51:07.948107] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:51.974  [2024-11-20 07:51:07.948209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60506 ]
00:06:52.233  [2024-11-20 07:51:08.073334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.233  [2024-11-20 07:51:08.141581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.233  [2024-11-20 07:51:08.141662] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:52.233  [2024-11-20 07:51:08.141678] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:52.233  [2024-11-20 07:51:08.141686] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:52.233  
00:06:52.233  real	0m0.294s
00:06:52.233  user	0m0.139s
00:06:52.233  sys	0m0.052s
00:06:52.233   07:51:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.233   07:51:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:52.233  ************************************
00:06:52.233  END TEST bdev_json_nonenclosed
00:06:52.233  ************************************
00:06:52.233   07:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:52.233   07:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:52.233   07:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.233   07:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:52.233  ************************************
00:06:52.233  START TEST bdev_json_nonarray
00:06:52.233  ************************************
00:06:52.233   07:51:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:52.493  [2024-11-20 07:51:08.312805] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:52.493  [2024-11-20 07:51:08.312975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60530 ]
00:06:52.493  [2024-11-20 07:51:08.458463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.493  [2024-11-20 07:51:08.524286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.493  [2024-11-20 07:51:08.524369] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:06:52.493  [2024-11-20 07:51:08.524393] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:52.493  [2024-11-20 07:51:08.524402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:52.752  
00:06:52.752  real	0m0.326s
00:06:52.752  user	0m0.164s
00:06:52.752  sys	0m0.057s
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:52.752  ************************************
00:06:52.752  END TEST bdev_json_nonarray
00:06:52.752  ************************************
00:06:52.752   07:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]]
00:06:52.752   07:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]]
00:06:52.752   07:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:06:52.752   07:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:52.752   07:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.752   07:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:52.752  ************************************
00:06:52.752  START TEST bdev_gpt_uuid
00:06:52.752  ************************************
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60549
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 60549
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 60549 ']'
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:52.752  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.752   07:51:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:52.752  [2024-11-20 07:51:08.706628] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:52.752  [2024-11-20 07:51:08.706795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60549 ]
00:06:53.011  [2024-11-20 07:51:08.843434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.011  [2024-11-20 07:51:08.911720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.270   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:53.270   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:06:53.270   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:53.270   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.270   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:53.529  Some configs were skipped because the RPC state that can call them passed over.
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[
00:06:53.529  {
00:06:53.529  "name": "Nvme1n1p1",
00:06:53.529  "aliases": [
00:06:53.529  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:06:53.529  ],
00:06:53.529  "product_name": "GPT Disk",
00:06:53.529  "block_size": 4096,
00:06:53.529  "num_blocks": 655104,
00:06:53.529  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:06:53.529  "assigned_rate_limits": {
00:06:53.529  "rw_ios_per_sec": 0,
00:06:53.529  "rw_mbytes_per_sec": 0,
00:06:53.529  "r_mbytes_per_sec": 0,
00:06:53.529  "w_mbytes_per_sec": 0
00:06:53.529  },
00:06:53.529  "claimed": false,
00:06:53.529  "zoned": false,
00:06:53.529  "supported_io_types": {
00:06:53.529  "read": true,
00:06:53.529  "write": true,
00:06:53.529  "unmap": true,
00:06:53.529  "flush": true,
00:06:53.529  "reset": true,
00:06:53.529  "nvme_admin": false,
00:06:53.529  "nvme_io": false,
00:06:53.529  "nvme_io_md": false,
00:06:53.529  "write_zeroes": true,
00:06:53.529  "zcopy": false,
00:06:53.529  "get_zone_info": false,
00:06:53.529  "zone_management": false,
00:06:53.529  "zone_append": false,
00:06:53.529  "compare": true,
00:06:53.529  "compare_and_write": false,
00:06:53.529  "abort": true,
00:06:53.529  "seek_hole": false,
00:06:53.529  "seek_data": false,
00:06:53.529  "copy": true,
00:06:53.529  "nvme_iov_md": false
00:06:53.529  },
00:06:53.529  "driver_specific": {
00:06:53.529  "gpt": {
00:06:53.529  "base_bdev": "Nvme1n1",
00:06:53.529  "offset_blocks": 256,
00:06:53.529  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:06:53.529  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:06:53.529  "partition_name": "SPDK_TEST_first"
00:06:53.529  }
00:06:53.529  }
00:06:53.529  }
00:06:53.529  ]'
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]]
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]'
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:06:53.529   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:53.529    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[
00:06:53.788  {
00:06:53.788  "name": "Nvme1n1p2",
00:06:53.788  "aliases": [
00:06:53.788  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:06:53.788  ],
00:06:53.788  "product_name": "GPT Disk",
00:06:53.788  "block_size": 4096,
00:06:53.788  "num_blocks": 655103,
00:06:53.788  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:06:53.788  "assigned_rate_limits": {
00:06:53.788  "rw_ios_per_sec": 0,
00:06:53.788  "rw_mbytes_per_sec": 0,
00:06:53.788  "r_mbytes_per_sec": 0,
00:06:53.788  "w_mbytes_per_sec": 0
00:06:53.788  },
00:06:53.788  "claimed": false,
00:06:53.788  "zoned": false,
00:06:53.788  "supported_io_types": {
00:06:53.788  "read": true,
00:06:53.788  "write": true,
00:06:53.788  "unmap": true,
00:06:53.788  "flush": true,
00:06:53.788  "reset": true,
00:06:53.788  "nvme_admin": false,
00:06:53.788  "nvme_io": false,
00:06:53.788  "nvme_io_md": false,
00:06:53.788  "write_zeroes": true,
00:06:53.788  "zcopy": false,
00:06:53.788  "get_zone_info": false,
00:06:53.788  "zone_management": false,
00:06:53.788  "zone_append": false,
00:06:53.788  "compare": true,
00:06:53.788  "compare_and_write": false,
00:06:53.788  "abort": true,
00:06:53.788  "seek_hole": false,
00:06:53.788  "seek_data": false,
00:06:53.788  "copy": true,
00:06:53.788  "nvme_iov_md": false
00:06:53.788  },
00:06:53.788  "driver_specific": {
00:06:53.788  "gpt": {
00:06:53.788  "base_bdev": "Nvme1n1",
00:06:53.788  "offset_blocks": 655360,
00:06:53.788  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:06:53.788  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:06:53.788  "partition_name": "SPDK_TEST_second"
00:06:53.788  }
00:06:53.788  }
00:06:53.788  }
00:06:53.788  ]'
00:06:53.788    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]]
00:06:53.788    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]'
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:06:53.788    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 60549
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 60549 ']'
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 60549
00:06:53.788    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.788    07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60549
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:53.788  killing process with pid 60549
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60549'
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 60549
00:06:53.788   07:51:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 60549
00:06:54.355  
00:06:54.355  real	0m1.517s
00:06:54.355  user	0m1.699s
00:06:54.355  sys	0m0.387s
00:06:54.355   07:51:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.355   07:51:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:54.355  ************************************
00:06:54.355  END TEST bdev_gpt_uuid
00:06:54.355  ************************************
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]]
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:06:54.355   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:54.614  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:54.614  Waiting for block devices as requested
00:06:54.614  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:54.872  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:54.872   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:06:54.872   07:51:10 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:06:55.131  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:06:55.131  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:06:55.131  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:55.131  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:06:55.131   07:51:11 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:06:55.131  
00:06:55.131  real	0m32.668s
00:06:55.131  user	0m48.155s
00:06:55.131  sys	0m6.443s
00:06:55.131   07:51:11 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.131   07:51:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:55.131  ************************************
00:06:55.132  END TEST blockdev_nvme_gpt
00:06:55.132  ************************************
00:06:55.132   07:51:11  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:06:55.132   07:51:11  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.132   07:51:11  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.132   07:51:11  -- common/autotest_common.sh@10 -- # set +x
00:06:55.132  ************************************
00:06:55.132  START TEST nvme
00:06:55.132  ************************************
00:06:55.132   07:51:11 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:06:55.132  * Looking for test storage...
00:06:55.132  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:06:55.132    07:51:11 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:55.132     07:51:11 nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:06:55.132     07:51:11 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:55.390    07:51:11 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:55.390    07:51:11 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.390    07:51:11 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.390    07:51:11 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.390    07:51:11 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.390    07:51:11 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.390    07:51:11 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.390    07:51:11 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.390    07:51:11 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.390    07:51:11 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.390    07:51:11 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.390    07:51:11 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.390    07:51:11 nvme -- scripts/common.sh@344 -- # case "$op" in
00:06:55.390    07:51:11 nvme -- scripts/common.sh@345 -- # : 1
00:06:55.390    07:51:11 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.390    07:51:11 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.390     07:51:11 nvme -- scripts/common.sh@365 -- # decimal 1
00:06:55.390     07:51:11 nvme -- scripts/common.sh@353 -- # local d=1
00:06:55.390     07:51:11 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.390     07:51:11 nvme -- scripts/common.sh@355 -- # echo 1
00:06:55.391    07:51:11 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.391     07:51:11 nvme -- scripts/common.sh@366 -- # decimal 2
00:06:55.391     07:51:11 nvme -- scripts/common.sh@353 -- # local d=2
00:06:55.391     07:51:11 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.391     07:51:11 nvme -- scripts/common.sh@355 -- # echo 2
00:06:55.391    07:51:11 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.391    07:51:11 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.391    07:51:11 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.391    07:51:11 nvme -- scripts/common.sh@368 -- # return 0
00:06:55.391    07:51:11 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.391    07:51:11 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:55.391  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.391  		--rc genhtml_branch_coverage=1
00:06:55.391  		--rc genhtml_function_coverage=1
00:06:55.391  		--rc genhtml_legend=1
00:06:55.391  		--rc geninfo_all_blocks=1
00:06:55.391  		--rc geninfo_unexecuted_blocks=1
00:06:55.391  		
00:06:55.391  		'
00:06:55.391    07:51:11 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:55.391  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.391  		--rc genhtml_branch_coverage=1
00:06:55.391  		--rc genhtml_function_coverage=1
00:06:55.391  		--rc genhtml_legend=1
00:06:55.391  		--rc geninfo_all_blocks=1
00:06:55.391  		--rc geninfo_unexecuted_blocks=1
00:06:55.391  		
00:06:55.391  		'
00:06:55.391    07:51:11 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:55.391  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.391  		--rc genhtml_branch_coverage=1
00:06:55.391  		--rc genhtml_function_coverage=1
00:06:55.391  		--rc genhtml_legend=1
00:06:55.391  		--rc geninfo_all_blocks=1
00:06:55.391  		--rc geninfo_unexecuted_blocks=1
00:06:55.391  		
00:06:55.391  		'
00:06:55.391    07:51:11 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:55.391  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.391  		--rc genhtml_branch_coverage=1
00:06:55.391  		--rc genhtml_function_coverage=1
00:06:55.391  		--rc genhtml_legend=1
00:06:55.391  		--rc geninfo_all_blocks=1
00:06:55.391  		--rc geninfo_unexecuted_blocks=1
00:06:55.391  		
00:06:55.391  		'
00:06:55.391   07:51:11 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:55.958  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:55.958  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:55.958  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:55.958    07:51:11 nvme -- nvme/nvme.sh@79 -- # uname
00:06:55.958   07:51:11 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:06:55.958   07:51:11 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:06:55.958   07:51:11 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1075 -- # stubpid=60941
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:06:55.958  Waiting for stub to ready for secondary processes...
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes...
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/60941 ]]
00:06:55.958   07:51:11 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:06:55.958  [2024-11-20 07:51:11.984646] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:55.958  [2024-11-20 07:51:11.984764] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:06:57.335   07:51:12 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:06:57.335   07:51:12 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/60941 ]]
00:06:57.335   07:51:12 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:06:57.335  [2024-11-20 07:51:13.261093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:57.335  [2024-11-20 07:51:13.322879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:57.335  [2024-11-20 07:51:13.322998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:57.335  [2024-11-20 07:51:13.323010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:57.335  [2024-11-20 07:51:13.330532] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:06:57.335  [2024-11-20 07:51:13.330584] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:06:57.335  [2024-11-20 07:51:13.344479] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:06:57.335  [2024-11-20 07:51:13.344606] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:06:57.335  [2024-11-20 07:51:13.345089] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:06:57.335  [2024-11-20 07:51:13.345385] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
00:06:57.335  [2024-11-20 07:51:13.345436] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
00:06:58.272   07:51:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:06:58.272  done.
00:06:58.272   07:51:13 nvme -- common/autotest_common.sh@1082 -- # echo done.
00:06:58.272   07:51:13 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:06:58.272   07:51:13 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:06:58.272   07:51:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.272   07:51:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:58.272  ************************************
00:06:58.272  START TEST nvme_reset
00:06:58.272  ************************************
00:06:58.272   07:51:13 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:06:58.272  Initializing NVMe Controllers
00:06:58.272  Skipping QEMU NVMe SSD at 0000:00:10.0
00:06:58.272  Skipping QEMU NVMe SSD at 0000:00:11.0
00:06:58.272  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:06:58.272  
00:06:58.272  real	0m0.196s
00:06:58.272  user	0m0.062s
00:06:58.272  sys	0m0.092s
00:06:58.272   07:51:14 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.272   07:51:14 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:06:58.272  ************************************
00:06:58.272  END TEST nvme_reset
00:06:58.272  ************************************
00:06:58.272   07:51:14 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:06:58.272   07:51:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.272   07:51:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.272   07:51:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:58.272  ************************************
00:06:58.272  START TEST nvme_identify
00:06:58.272  ************************************
00:06:58.272   07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:06:58.272   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:06:58.272   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:06:58.272   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:06:58.272    07:51:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:06:58.272    07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:58.272    07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:06:58.272    07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:58.272     07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:58.272     07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:58.272    07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:06:58.272    07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:58.272   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:06:58.533  [2024-11-20 07:51:14.462214] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 60970 terminated unexpected
00:06:58.533  =====================================================
00:06:58.533  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:06:58.533  =====================================================
00:06:58.533  Controller Capabilities/Features
00:06:58.533  ================================
00:06:58.533  Vendor ID:                             1b36
00:06:58.533  Subsystem Vendor ID:                   1af4
00:06:58.533  Serial Number:                         12340
00:06:58.533  Model Number:                          QEMU NVMe Ctrl
00:06:58.533  Firmware Version:                      8.0.0
00:06:58.533  Recommended Arb Burst:                 6
00:06:58.533  IEEE OUI Identifier:                   00 54 52
00:06:58.533  Multi-path I/O
00:06:58.533    May have multiple subsystem ports:   No
00:06:58.533    May have multiple controllers:       No
00:06:58.533    Associated with SR-IOV VF:           No
00:06:58.533  Max Data Transfer Size:                524288
00:06:58.533  Max Number of Namespaces:              256
00:06:58.533  Max Number of I/O Queues:              64
00:06:58.533  NVMe Specification Version (VS):       1.4
00:06:58.533  NVMe Specification Version (Identify): 1.4
00:06:58.533  Maximum Queue Entries:                 2048
00:06:58.533  Contiguous Queues Required:            Yes
00:06:58.533  Arbitration Mechanisms Supported
00:06:58.533    Weighted Round Robin:                Not Supported
00:06:58.533    Vendor Specific:                     Not Supported
00:06:58.533  Reset Timeout:                         7500 ms
00:06:58.533  Doorbell Stride:                       4 bytes
00:06:58.533  NVM Subsystem Reset:                   Not Supported
00:06:58.533  Command Sets Supported
00:06:58.533    NVM Command Set:                     Supported
00:06:58.533  Boot Partition:                        Not Supported
00:06:58.533  Memory Page Size Minimum:              4096 bytes
00:06:58.533  Memory Page Size Maximum:              65536 bytes
00:06:58.533  Persistent Memory Region:              Supported
00:06:58.533  Optional Asynchronous Events Supported
00:06:58.533    Namespace Attribute Notices:         Supported
00:06:58.533    Firmware Activation Notices:         Not Supported
00:06:58.533    ANA Change Notices:                  Not Supported
00:06:58.533    PLE Aggregate Log Change Notices:    Not Supported
00:06:58.533    LBA Status Info Alert Notices:       Not Supported
00:06:58.533    EGE Aggregate Log Change Notices:    Not Supported
00:06:58.533    Normal NVM Subsystem Shutdown event: Not Supported
00:06:58.533    Zone Descriptor Change Notices:      Not Supported
00:06:58.533    Discovery Log Change Notices:        Not Supported
00:06:58.533  Controller Attributes
00:06:58.533    128-bit Host Identifier:             Not Supported
00:06:58.533    Non-Operational Permissive Mode:     Not Supported
00:06:58.533    NVM Sets:                            Not Supported
00:06:58.533    Read Recovery Levels:                Not Supported
00:06:58.533    Endurance Groups:                    Not Supported
00:06:58.533    Predictable Latency Mode:            Not Supported
00:06:58.533    Traffic Based Keep ALive:            Not Supported
00:06:58.533    Namespace Granularity:               Not Supported
00:06:58.533    SQ Associations:                     Not Supported
00:06:58.533    UUID List:                           Not Supported
00:06:58.533    Multi-Domain Subsystem:              Not Supported
00:06:58.533    Fixed Capacity Management:           Not Supported
00:06:58.533    Variable Capacity Management:        Not Supported
00:06:58.533    Delete Endurance Group:              Not Supported
00:06:58.533    Delete NVM Set:                      Not Supported
00:06:58.533    Extended LBA Formats Supported:      Supported
00:06:58.533    Flexible Data Placement Supported:   Not Supported
00:06:58.533  
00:06:58.533  Controller Memory Buffer Support
00:06:58.533  ================================
00:06:58.533  Supported:                             Yes
00:06:58.533  Total Size:                            134217728 bytes
00:06:58.533  Submission Queues in CMB:              Supported
00:06:58.533  Completion Queues in CMB:              Not Supported
00:06:58.533  Read data and metadata in CMB          Supported
00:06:58.533  Write data and metadata in CMB:        Supported
00:06:58.533  
00:06:58.533  Persistent Memory Region Support
00:06:58.533  ================================
00:06:58.533  Supported:                             Yes
00:06:58.533  Total Size:                            33554432 bytes
00:06:58.533  Read data and metadata in PMR          Supported
00:06:58.533  Write data and metadata in PMR:        Supported
00:06:58.534  
00:06:58.534  Admin Command Set Attributes
00:06:58.534  ============================
00:06:58.534  Security Send/Receive:                 Not Supported
00:06:58.534  Format NVM:                            Supported
00:06:58.534  Firmware Activate/Download:            Not Supported
00:06:58.534  Namespace Management:                  Supported
00:06:58.534  Device Self-Test:                      Not Supported
00:06:58.534  Directives:                            Supported
00:06:58.534  NVMe-MI:                               Not Supported
00:06:58.534  Virtualization Management:             Not Supported
00:06:58.534  Doorbell Buffer Config:                Supported
00:06:58.534  Get LBA Status Capability:             Not Supported
00:06:58.534  Command & Feature Lockdown Capability: Not Supported
00:06:58.534  Abort Command Limit:                   4
00:06:58.534  Async Event Request Limit:             4
00:06:58.534  Number of Firmware Slots:              N/A
00:06:58.534  Firmware Slot 1 Read-Only:             N/A
00:06:58.534  Firmware Activation Without Reset:     N/A
00:06:58.534  Multiple Update Detection Support:     N/A
00:06:58.534  Firmware Update Granularity:           No Information Provided
00:06:58.534  Per-Namespace SMART Log:               Yes
00:06:58.534  Asymmetric Namespace Access Log Page:  Not Supported
00:06:58.534  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:06:58.534  Command Effects Log Page:              Supported
00:06:58.534  Get Log Page Extended Data:            Supported
00:06:58.534  Telemetry Log Pages:                   Not Supported
00:06:58.534  Persistent Event Log Pages:            Not Supported
00:06:58.534  Supported Log Pages Log Page:          May Support
00:06:58.534  Commands Supported & Effects Log Page: Not Supported
00:06:58.534  Feature Identifiers & Effects Log Page:May Support
00:06:58.534  NVMe-MI Commands & Effects Log Page:   May Support
00:06:58.534  Data Area 4 for Telemetry Log:         Not Supported
00:06:58.534  Error Log Page Entries Supported:      1
00:06:58.534  Keep Alive:                            Not Supported
00:06:58.534  
00:06:58.534  NVM Command Set Attributes
00:06:58.534  ==========================
00:06:58.534  Submission Queue Entry Size
00:06:58.534    Max:                       64
00:06:58.534    Min:                       64
00:06:58.534  Completion Queue Entry Size
00:06:58.534    Max:                       16
00:06:58.534    Min:                       16
00:06:58.534  Number of Namespaces:        256
00:06:58.534  Compare Command:             Supported
00:06:58.534  Write Uncorrectable Command: Not Supported
00:06:58.534  Dataset Management Command:  Supported
00:06:58.534  Write Zeroes Command:        Supported
00:06:58.534  Set Features Save Field:     Supported
00:06:58.534  Reservations:                Not Supported
00:06:58.534  Timestamp:                   Supported
00:06:58.534  Copy:                        Supported
00:06:58.534  Volatile Write Cache:        Present
00:06:58.534  Atomic Write Unit (Normal):  1
00:06:58.534  Atomic Write Unit (PFail):   1
00:06:58.534  Atomic Compare & Write Unit: 1
00:06:58.534  Fused Compare & Write:       Not Supported
00:06:58.534  Scatter-Gather List
00:06:58.534    SGL Command Set:           Supported
00:06:58.534    SGL Keyed:                 Not Supported
00:06:58.534    SGL Bit Bucket Descriptor: Not Supported
00:06:58.534    SGL Metadata Pointer:      Not Supported
00:06:58.534    Oversized SGL:             Not Supported
00:06:58.534    SGL Metadata Address:      Not Supported
00:06:58.534    SGL Offset:                Not Supported
00:06:58.534    Transport SGL Data Block:  Not Supported
00:06:58.534  Replay Protected Memory Block:  Not Supported
00:06:58.534  
00:06:58.534  Firmware Slot Information
00:06:58.534  =========================
00:06:58.534  Active slot:                 1
00:06:58.534  Slot 1 Firmware Revision:    1.0
00:06:58.534  
00:06:58.534  
00:06:58.534  Commands Supported and Effects
00:06:58.534  ==============================
00:06:58.534  Admin Commands
00:06:58.534  --------------
00:06:58.534     Delete I/O Submission Queue (00h): Supported 
00:06:58.534     Create I/O Submission Queue (01h): Supported 
00:06:58.534                    Get Log Page (02h): Supported 
00:06:58.534     Delete I/O Completion Queue (04h): Supported 
00:06:58.534     Create I/O Completion Queue (05h): Supported 
00:06:58.534                        Identify (06h): Supported 
00:06:58.534                           Abort (08h): Supported 
00:06:58.534                    Set Features (09h): Supported 
00:06:58.534                    Get Features (0Ah): Supported 
00:06:58.534      Asynchronous Event Request (0Ch): Supported 
00:06:58.534            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:06:58.534                  Directive Send (19h): Supported 
00:06:58.534               Directive Receive (1Ah): Supported 
00:06:58.534       Virtualization Management (1Ch): Supported 
00:06:58.534          Doorbell Buffer Config (7Ch): Supported 
00:06:58.534                      Format NVM (80h): Supported LBA-Change 
00:06:58.534  I/O Commands
00:06:58.534  ------------
00:06:58.534                           Flush (00h): Supported LBA-Change 
00:06:58.534                           Write (01h): Supported LBA-Change 
00:06:58.534                            Read (02h): Supported 
00:06:58.534                         Compare (05h): Supported 
00:06:58.534                    Write Zeroes (08h): Supported LBA-Change 
00:06:58.534              Dataset Management (09h): Supported LBA-Change 
00:06:58.534                         Unknown (0Ch): Supported 
00:06:58.534                         Unknown (12h): Supported 
00:06:58.534                            Copy (19h): Supported LBA-Change 
00:06:58.534                         Unknown (1Dh): Supported LBA-Change 
00:06:58.534  
00:06:58.534  Error Log
00:06:58.534  =========
00:06:58.534  
00:06:58.534  Arbitration
00:06:58.534  ===========
00:06:58.534  Arbitration Burst:           no limit
00:06:58.534  
00:06:58.534  Power Management
00:06:58.534  ================
00:06:58.534  Number of Power States:          1
00:06:58.534  Current Power State:             Power State #0
00:06:58.534  Power State #0:
00:06:58.534    Max Power:                     25.00 W
00:06:58.534    Non-Operational State:         Operational
00:06:58.534    Entry Latency:                 16 microseconds
00:06:58.534    Exit Latency:                  4 microseconds
00:06:58.534    Relative Read Throughput:      0
00:06:58.534    Relative Read Latency:         0
00:06:58.534    Relative Write Throughput:     0
00:06:58.534    Relative Write Latency:        0
00:06:58.534    Idle Power:                     Not Reported
00:06:58.534    Active Power:                   Not Reported
00:06:58.534  Non-Operational Permissive Mode: Not Supported
00:06:58.534  
00:06:58.534  Health Information
00:06:58.534  ==================
00:06:58.534  Critical Warnings:
00:06:58.534    Available Spare Space:     OK
00:06:58.534    Temperature:               OK
00:06:58.534    Device Reliability:        OK
00:06:58.534    Read Only:                 No
00:06:58.534    Volatile Memory Backup:    OK
00:06:58.534  Current Temperature:         323 Kelvin (50 Celsius)
00:06:58.534  Temperature Threshold:       343 Kelvin (70 Celsius)
00:06:58.534  Available Spare:             0%
00:06:58.534  Available Spare Threshold:   0%
00:06:58.534  Life Percentage Used:        0%
00:06:58.534  Data Units Read:             2721
00:06:58.534  Data Units Written:          2650
00:06:58.534  Host Read Commands:          144997
00:06:58.534  Host Write Commands:         144421
00:06:58.534  Controller Busy Time:        0 minutes
00:06:58.534  Power Cycles:                0
00:06:58.534  Power On Hours:              0 hours
00:06:58.534  Unsafe Shutdowns:            0
00:06:58.534  Unrecoverable Media Errors:  0
00:06:58.534  Lifetime Error Log Entries:  0
00:06:58.534  Warning Temperature Time:    0 minutes
00:06:58.534  Critical Temperature Time:   0 minutes
00:06:58.534  
00:06:58.534  Number of Queues
00:06:58.534  ================
00:06:58.534  Number of I/O Submission Queues:      64
00:06:58.534  Number of I/O Completion Queues:      64
00:06:58.534  
00:06:58.534  ZNS Specific Controller Data
00:06:58.534  ============================
00:06:58.534  Zone Append Size Limit:      0
00:06:58.534  
00:06:58.534  
00:06:58.534  Active Namespaces
00:06:58.534  =================
00:06:58.534  Namespace ID:1
00:06:58.534  Error Recovery Timeout:                Unlimited
00:06:58.534  Command Set Identifier:                NVM (00h)
00:06:58.534  Deallocate:                            Supported
00:06:58.534  Deallocated/Unwritten Error:           Supported
00:06:58.534  Deallocated Read Value:                All 0x00
00:06:58.534  Deallocate in Write Zeroes:            Not Supported
00:06:58.534  Deallocated Guard Field:               0xFFFF
00:06:58.534  Flush:                                 Supported
00:06:58.534  Reservation:                           Not Supported
00:06:58.534  Namespace Sharing Capabilities:        Private
00:06:58.534  Size (in LBAs):                        1310720 (5GiB)
00:06:58.534  Capacity (in LBAs):                    1310720 (5GiB)
00:06:58.534  Utilization (in LBAs):                 1310720 (5GiB)
00:06:58.534  Thin Provisioning:                     Not Supported
00:06:58.534  Per-NS Atomic Units:                   No
00:06:58.534  Maximum Single Source Range Length:    128
00:06:58.534  Maximum Copy Length:                   128
00:06:58.534  Maximum Source Range Count:            128
00:06:58.534  NGUID/EUI64 Never Reused:              No
00:06:58.534  Namespace Write Protected:             No
00:06:58.534  Number of LBA Formats:                 8
00:06:58.534  Current LBA Format:                    LBA Format #04
00:06:58.534  LBA Format #00: Data Size:   512  Metadata Size:     0
00:06:58.534  LBA Format #01: Data Size:   512  Metadata Size:     8
00:06:58.534  LBA Format #02: Data Size:   512  Metadata Size:    16
00:06:58.534  LBA Format #03: Data Size:   512  Metadata Size:    64
00:06:58.534  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:06:58.534  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:06:58.534  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:06:58.534  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:06:58.534  
00:06:58.534  NVM Specific Namespace Data
00:06:58.535  ===========================
00:06:58.535  Logical Block Storage Tag Mask:               0
00:06:58.535  Protection Information Capabilities:
00:06:58.535    16b Guard Protection Information Storage Tag Support:  No
00:06:58.535    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:06:58.535    Storage Tag Check Read Support:                        No
00:06:58.535  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  [2024-11-20 07:51:14.463666] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 60970 terminated unexpected
00:06:58.535  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.535  =====================================================
00:06:58.535  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:06:58.535  =====================================================
00:06:58.535  Controller Capabilities/Features
00:06:58.535  ================================
00:06:58.535  Vendor ID:                             1b36
00:06:58.535  Subsystem Vendor ID:                   1af4
00:06:58.535  Serial Number:                         12341
00:06:58.535  Model Number:                          QEMU NVMe Ctrl
00:06:58.535  Firmware Version:                      8.0.0
00:06:58.535  Recommended Arb Burst:                 6
00:06:58.535  IEEE OUI Identifier:                   00 54 52
00:06:58.535  Multi-path I/O
00:06:58.535    May have multiple subsystem ports:   No
00:06:58.535    May have multiple controllers:       No
00:06:58.535    Associated with SR-IOV VF:           No
00:06:58.535  Max Data Transfer Size:                524288
00:06:58.535  Max Number of Namespaces:              256
00:06:58.535  Max Number of I/O Queues:              64
00:06:58.535  NVMe Specification Version (VS):       1.4
00:06:58.535  NVMe Specification Version (Identify): 1.4
00:06:58.535  Maximum Queue Entries:                 2048
00:06:58.535  Contiguous Queues Required:            Yes
00:06:58.535  Arbitration Mechanisms Supported
00:06:58.535    Weighted Round Robin:                Not Supported
00:06:58.535    Vendor Specific:                     Not Supported
00:06:58.535  Reset Timeout:                         7500 ms
00:06:58.535  Doorbell Stride:                       4 bytes
00:06:58.535  NVM Subsystem Reset:                   Not Supported
00:06:58.535  Command Sets Supported
00:06:58.535    NVM Command Set:                     Supported
00:06:58.535  Boot Partition:                        Not Supported
00:06:58.535  Memory Page Size Minimum:              4096 bytes
00:06:58.535  Memory Page Size Maximum:              65536 bytes
00:06:58.535  Persistent Memory Region:              Supported
00:06:58.535  Optional Asynchronous Events Supported
00:06:58.535    Namespace Attribute Notices:         Supported
00:06:58.535    Firmware Activation Notices:         Not Supported
00:06:58.535    ANA Change Notices:                  Not Supported
00:06:58.535    PLE Aggregate Log Change Notices:    Not Supported
00:06:58.535    LBA Status Info Alert Notices:       Not Supported
00:06:58.535    EGE Aggregate Log Change Notices:    Not Supported
00:06:58.535    Normal NVM Subsystem Shutdown event: Not Supported
00:06:58.535    Zone Descriptor Change Notices:      Not Supported
00:06:58.535    Discovery Log Change Notices:        Not Supported
00:06:58.535  Controller Attributes
00:06:58.535    128-bit Host Identifier:             Not Supported
00:06:58.535    Non-Operational Permissive Mode:     Not Supported
00:06:58.535    NVM Sets:                            Not Supported
00:06:58.535    Read Recovery Levels:                Not Supported
00:06:58.535    Endurance Groups:                    Not Supported
00:06:58.535    Predictable Latency Mode:            Not Supported
00:06:58.535    Traffic Based Keep Alive:            Not Supported
00:06:58.535    Namespace Granularity:               Not Supported
00:06:58.535    SQ Associations:                     Not Supported
00:06:58.535    UUID List:                           Not Supported
00:06:58.535    Multi-Domain Subsystem:              Not Supported
00:06:58.535    Fixed Capacity Management:           Not Supported
00:06:58.535    Variable Capacity Management:        Not Supported
00:06:58.535    Delete Endurance Group:              Not Supported
00:06:58.535    Delete NVM Set:                      Not Supported
00:06:58.535    Extended LBA Formats Supported:      Supported
00:06:58.535    Flexible Data Placement Supported:   Not Supported
00:06:58.535  
00:06:58.535  Controller Memory Buffer Support
00:06:58.535  ================================
00:06:58.535  Supported:                             Yes
00:06:58.535  Total Size:                            134217728 bytes
00:06:58.535  Submission Queues in CMB:              Supported
00:06:58.535  Completion Queues in CMB:              Not Supported
00:06:58.535  Read data and metadata in CMB:         Supported
00:06:58.535  Write data and metadata in CMB:        Supported
00:06:58.535  
00:06:58.535  Persistent Memory Region Support
00:06:58.535  ================================
00:06:58.535  Supported:                             Yes
00:06:58.535  Total Size:                            33554432 bytes
00:06:58.535  Read data and metadata in PMR:         Supported
00:06:58.535  Write data and metadata in PMR:        Supported
00:06:58.535  
00:06:58.535  Admin Command Set Attributes
00:06:58.535  ============================
00:06:58.535  Security Send/Receive:                 Not Supported
00:06:58.535  Format NVM:                            Supported
00:06:58.535  Firmware Activate/Download:            Not Supported
00:06:58.535  Namespace Management:                  Supported
00:06:58.535  Device Self-Test:                      Not Supported
00:06:58.535  Directives:                            Supported
00:06:58.535  NVMe-MI:                               Not Supported
00:06:58.535  Virtualization Management:             Not Supported
00:06:58.535  Doorbell Buffer Config:                Supported
00:06:58.535  Get LBA Status Capability:             Not Supported
00:06:58.535  Command & Feature Lockdown Capability: Not Supported
00:06:58.535  Abort Command Limit:                   4
00:06:58.535  Async Event Request Limit:             4
00:06:58.535  Number of Firmware Slots:              N/A
00:06:58.535  Firmware Slot 1 Read-Only:             N/A
00:06:58.535  Firmware Activation Without Reset:     N/A
00:06:58.535  Multiple Update Detection Support:     N/A
00:06:58.535  Firmware Update Granularity:           No Information Provided
00:06:58.535  Per-Namespace SMART Log:               Yes
00:06:58.535  Asymmetric Namespace Access Log Page:  Not Supported
00:06:58.535  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:06:58.535  Command Effects Log Page:              Supported
00:06:58.535  Get Log Page Extended Data:            Supported
00:06:58.535  Telemetry Log Pages:                   Not Supported
00:06:58.535  Persistent Event Log Pages:            Not Supported
00:06:58.535  Supported Log Pages Log Page:          May Support
00:06:58.535  Commands Supported & Effects Log Page: Not Supported
00:06:58.535  Feature Identifiers & Effects Log Page: May Support
00:06:58.535  NVMe-MI Commands & Effects Log Page:   May Support
00:06:58.535  Data Area 4 for Telemetry Log:         Not Supported
00:06:58.535  Error Log Page Entries Supported:      1
00:06:58.535  Keep Alive:                            Not Supported
00:06:58.535  
00:06:58.535  NVM Command Set Attributes
00:06:58.535  ==========================
00:06:58.535  Submission Queue Entry Size
00:06:58.535    Max:                       64
00:06:58.535    Min:                       64
00:06:58.535  Completion Queue Entry Size
00:06:58.535    Max:                       16
00:06:58.535    Min:                       16
00:06:58.535  Number of Namespaces:        256
00:06:58.535  Compare Command:             Supported
00:06:58.535  Write Uncorrectable Command: Not Supported
00:06:58.535  Dataset Management Command:  Supported
00:06:58.535  Write Zeroes Command:        Supported
00:06:58.535  Set Features Save Field:     Supported
00:06:58.535  Reservations:                Not Supported
00:06:58.535  Timestamp:                   Supported
00:06:58.535  Copy:                        Supported
00:06:58.535  Volatile Write Cache:        Present
00:06:58.535  Atomic Write Unit (Normal):  1
00:06:58.535  Atomic Write Unit (PFail):   1
00:06:58.535  Atomic Compare & Write Unit: 1
00:06:58.535  Fused Compare & Write:       Not Supported
00:06:58.535  Scatter-Gather List
00:06:58.535    SGL Command Set:           Supported
00:06:58.535    SGL Keyed:                 Not Supported
00:06:58.535    SGL Bit Bucket Descriptor: Not Supported
00:06:58.535    SGL Metadata Pointer:      Not Supported
00:06:58.535    Oversized SGL:             Not Supported
00:06:58.535    SGL Metadata Address:      Not Supported
00:06:58.535    SGL Offset:                Not Supported
00:06:58.535    Transport SGL Data Block:  Not Supported
00:06:58.535  Replay Protected Memory Block:  Not Supported
00:06:58.535  
00:06:58.535  Firmware Slot Information
00:06:58.535  =========================
00:06:58.535  Active slot:                 1
00:06:58.535  Slot 1 Firmware Revision:    1.0
00:06:58.535  
00:06:58.535  
00:06:58.535  Commands Supported and Effects
00:06:58.535  ==============================
00:06:58.535  Admin Commands
00:06:58.535  --------------
00:06:58.535     Delete I/O Submission Queue (00h): Supported 
00:06:58.535     Create I/O Submission Queue (01h): Supported 
00:06:58.535                    Get Log Page (02h): Supported 
00:06:58.535     Delete I/O Completion Queue (04h): Supported 
00:06:58.535     Create I/O Completion Queue (05h): Supported 
00:06:58.535                        Identify (06h): Supported 
00:06:58.535                           Abort (08h): Supported 
00:06:58.535                    Set Features (09h): Supported 
00:06:58.535                    Get Features (0Ah): Supported 
00:06:58.535      Asynchronous Event Request (0Ch): Supported 
00:06:58.535            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:06:58.535                  Directive Send (19h): Supported 
00:06:58.535               Directive Receive (1Ah): Supported 
00:06:58.535       Virtualization Management (1Ch): Supported 
00:06:58.535          Doorbell Buffer Config (7Ch): Supported 
00:06:58.535                      Format NVM (80h): Supported LBA-Change 
00:06:58.536  I/O Commands
00:06:58.536  ------------
00:06:58.536                           Flush (00h): Supported LBA-Change 
00:06:58.536                           Write (01h): Supported LBA-Change 
00:06:58.536                            Read (02h): Supported 
00:06:58.536                         Compare (05h): Supported 
00:06:58.536                    Write Zeroes (08h): Supported LBA-Change 
00:06:58.536              Dataset Management (09h): Supported LBA-Change 
00:06:58.536                         Unknown (0Ch): Supported 
00:06:58.536                         Unknown (12h): Supported 
00:06:58.536                            Copy (19h): Supported LBA-Change 
00:06:58.536                         Unknown (1Dh): Supported LBA-Change 
00:06:58.536  
00:06:58.536  Error Log
00:06:58.536  =========
00:06:58.536  
00:06:58.536  Arbitration
00:06:58.536  ===========
00:06:58.536  Arbitration Burst:           no limit
00:06:58.536  
00:06:58.536  Power Management
00:06:58.536  ================
00:06:58.536  Number of Power States:          1
00:06:58.536  Current Power State:             Power State #0
00:06:58.536  Power State #0:
00:06:58.536    Max Power:                     25.00 W
00:06:58.536    Non-Operational State:         Operational
00:06:58.536    Entry Latency:                 16 microseconds
00:06:58.536    Exit Latency:                  4 microseconds
00:06:58.536    Relative Read Throughput:      0
00:06:58.536    Relative Read Latency:         0
00:06:58.536    Relative Write Throughput:     0
00:06:58.536    Relative Write Latency:        0
00:06:58.536    Idle Power:                     Not Reported
00:06:58.536    Active Power:                   Not Reported
00:06:58.536  Non-Operational Permissive Mode: Not Supported
00:06:58.536  
00:06:58.536  Health Information
00:06:58.536  ==================
00:06:58.536  Critical Warnings:
00:06:58.536    Available Spare Space:     OK
00:06:58.536    Temperature:               OK
00:06:58.536    Device Reliability:        OK
00:06:58.536    Read Only:                 No
00:06:58.536    Volatile Memory Backup:    OK
00:06:58.536  Current Temperature:         323 Kelvin (50 Celsius)
00:06:58.536  Temperature Threshold:       343 Kelvin (70 Celsius)
00:06:58.536  Available Spare:             0%
00:06:58.536  Available Spare Threshold:   0%
00:06:58.536  Life Percentage Used:        0%
00:06:58.536  Data Units Read:             3886
00:06:58.536  Data Units Written:          3760
00:06:58.536  Host Read Commands:          203475
00:06:58.536  Host Write Commands:         202364
00:06:58.536  Controller Busy Time:        0 minutes
00:06:58.536  Power Cycles:                0
00:06:58.536  Power On Hours:              0 hours
00:06:58.536  Unsafe Shutdowns:            0
00:06:58.536  Unrecoverable Media Errors:  0
00:06:58.536  Lifetime Error Log Entries:  0
00:06:58.536  Warning Temperature Time:    0 minutes
00:06:58.536  Critical Temperature Time:   0 minutes
00:06:58.536  
00:06:58.536  Number of Queues
00:06:58.536  ================
00:06:58.536  Number of I/O Submission Queues:      64
00:06:58.536  Number of I/O Completion Queues:      64
00:06:58.536  
00:06:58.536  ZNS Specific Controller Data
00:06:58.536  ============================
00:06:58.536  Zone Append Size Limit:      0
00:06:58.536  
00:06:58.536  
00:06:58.536  Active Namespaces
00:06:58.536  =================
00:06:58.536  Namespace ID:1
00:06:58.536  Error Recovery Timeout:                Unlimited
00:06:58.536  Command Set Identifier:                NVM (00h)
00:06:58.536  Deallocate:                            Supported
00:06:58.536  Deallocated/Unwritten Error:           Supported
00:06:58.536  Deallocated Read Value:                All 0x00
00:06:58.536  Deallocate in Write Zeroes:            Not Supported
00:06:58.536  Deallocated Guard Field:               0xFFFF
00:06:58.536  Flush:                                 Supported
00:06:58.536  Reservation:                           Not Supported
00:06:58.536  Namespace Sharing Capabilities:        Private
00:06:58.536  Size (in LBAs):                        1310720 (5GiB)
00:06:58.536  Capacity (in LBAs):                    1310720 (5GiB)
00:06:58.536  Utilization (in LBAs):                 1310720 (5GiB)
00:06:58.536  Thin Provisioning:                     Not Supported
00:06:58.536  Per-NS Atomic Units:                   No
00:06:58.536  Maximum Single Source Range Length:    128
00:06:58.536  Maximum Copy Length:                   128
00:06:58.536  Maximum Source Range Count:            128
00:06:58.536  NGUID/EUI64 Never Reused:              No
00:06:58.536  Namespace Write Protected:             No
00:06:58.536  Number of LBA Formats:                 8
00:06:58.536  Current LBA Format:                    LBA Format #04
00:06:58.536  LBA Format #00: Data Size:   512  Metadata Size:     0
00:06:58.536  LBA Format #01: Data Size:   512  Metadata Size:     8
00:06:58.536  LBA Format #02: Data Size:   512  Metadata Size:    16
00:06:58.536  LBA Format #03: Data Size:   512  Metadata Size:    64
00:06:58.536  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:06:58.536  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:06:58.536  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:06:58.536  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:06:58.536  
00:06:58.536  NVM Specific Namespace Data
00:06:58.536  ===========================
00:06:58.536  Logical Block Storage Tag Mask:               0
00:06:58.536  Protection Information Capabilities:
00:06:58.536    16b Guard Protection Information Storage Tag Support:  No
00:06:58.536    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:06:58.536    Storage Tag Check Read Support:                        No
00:06:58.536  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.536   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:06:58.536   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:06:58.796  =====================================================
00:06:58.796  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:06:58.796  =====================================================
00:06:58.796  Controller Capabilities/Features
00:06:58.796  ================================
00:06:58.796  Vendor ID:                             1b36
00:06:58.796  Subsystem Vendor ID:                   1af4
00:06:58.796  Serial Number:                         12340
00:06:58.796  Model Number:                          QEMU NVMe Ctrl
00:06:58.796  Firmware Version:                      8.0.0
00:06:58.796  Recommended Arb Burst:                 6
00:06:58.796  IEEE OUI Identifier:                   00 54 52
00:06:58.796  Multi-path I/O
00:06:58.796    May have multiple subsystem ports:   No
00:06:58.796    May have multiple controllers:       No
00:06:58.796    Associated with SR-IOV VF:           No
00:06:58.796  Max Data Transfer Size:                524288
00:06:58.796  Max Number of Namespaces:              256
00:06:58.796  Max Number of I/O Queues:              64
00:06:58.796  NVMe Specification Version (VS):       1.4
00:06:58.796  NVMe Specification Version (Identify): 1.4
00:06:58.796  Maximum Queue Entries:                 2048
00:06:58.796  Contiguous Queues Required:            Yes
00:06:58.796  Arbitration Mechanisms Supported
00:06:58.796    Weighted Round Robin:                Not Supported
00:06:58.796    Vendor Specific:                     Not Supported
00:06:58.796  Reset Timeout:                         7500 ms
00:06:58.796  Doorbell Stride:                       4 bytes
00:06:58.796  NVM Subsystem Reset:                   Not Supported
00:06:58.796  Command Sets Supported
00:06:58.796    NVM Command Set:                     Supported
00:06:58.796  Boot Partition:                        Not Supported
00:06:58.796  Memory Page Size Minimum:              4096 bytes
00:06:58.796  Memory Page Size Maximum:              65536 bytes
00:06:58.796  Persistent Memory Region:              Supported
00:06:58.796  Optional Asynchronous Events Supported
00:06:58.796    Namespace Attribute Notices:         Supported
00:06:58.796    Firmware Activation Notices:         Not Supported
00:06:58.796    ANA Change Notices:                  Not Supported
00:06:58.796    PLE Aggregate Log Change Notices:    Not Supported
00:06:58.796    LBA Status Info Alert Notices:       Not Supported
00:06:58.796    EGE Aggregate Log Change Notices:    Not Supported
00:06:58.796    Normal NVM Subsystem Shutdown event: Not Supported
00:06:58.796    Zone Descriptor Change Notices:      Not Supported
00:06:58.796    Discovery Log Change Notices:        Not Supported
00:06:58.796  Controller Attributes
00:06:58.796    128-bit Host Identifier:             Not Supported
00:06:58.796    Non-Operational Permissive Mode:     Not Supported
00:06:58.796    NVM Sets:                            Not Supported
00:06:58.796    Read Recovery Levels:                Not Supported
00:06:58.796    Endurance Groups:                    Not Supported
00:06:58.796    Predictable Latency Mode:            Not Supported
00:06:58.796    Traffic Based Keep Alive:            Not Supported
00:06:58.796    Namespace Granularity:               Not Supported
00:06:58.796    SQ Associations:                     Not Supported
00:06:58.796    UUID List:                           Not Supported
00:06:58.796    Multi-Domain Subsystem:              Not Supported
00:06:58.796    Fixed Capacity Management:           Not Supported
00:06:58.796    Variable Capacity Management:        Not Supported
00:06:58.796    Delete Endurance Group:              Not Supported
00:06:58.796    Delete NVM Set:                      Not Supported
00:06:58.796    Extended LBA Formats Supported:      Supported
00:06:58.796    Flexible Data Placement Supported:   Not Supported
00:06:58.796  
00:06:58.796  Controller Memory Buffer Support
00:06:58.796  ================================
00:06:58.796  Supported:                             Yes
00:06:58.796  Total Size:                            134217728 bytes
00:06:58.796  Submission Queues in CMB:              Supported
00:06:58.796  Completion Queues in CMB:              Not Supported
00:06:58.796  Read data and metadata in CMB:         Supported
00:06:58.796  Write data and metadata in CMB:        Supported
00:06:58.796  
00:06:58.796  Persistent Memory Region Support
00:06:58.796  ================================
00:06:58.796  Supported:                             Yes
00:06:58.796  Total Size:                            33554432 bytes
00:06:58.796  Read data and metadata in PMR:         Supported
00:06:58.796  Write data and metadata in PMR:        Supported
00:06:58.796  
00:06:58.796  Admin Command Set Attributes
00:06:58.796  ============================
00:06:58.796  Security Send/Receive:                 Not Supported
00:06:58.796  Format NVM:                            Supported
00:06:58.796  Firmware Activate/Download:            Not Supported
00:06:58.796  Namespace Management:                  Supported
00:06:58.796  Device Self-Test:                      Not Supported
00:06:58.796  Directives:                            Supported
00:06:58.796  NVMe-MI:                               Not Supported
00:06:58.796  Virtualization Management:             Not Supported
00:06:58.796  Doorbell Buffer Config:                Supported
00:06:58.796  Get LBA Status Capability:             Not Supported
00:06:58.796  Command & Feature Lockdown Capability: Not Supported
00:06:58.796  Abort Command Limit:                   4
00:06:58.796  Async Event Request Limit:             4
00:06:58.796  Number of Firmware Slots:              N/A
00:06:58.796  Firmware Slot 1 Read-Only:             N/A
00:06:58.796  Firmware Activation Without Reset:     N/A
00:06:58.796  Multiple Update Detection Support:     N/A
00:06:58.797  Firmware Update Granularity:           No Information Provided
00:06:58.797  Per-Namespace SMART Log:               Yes
00:06:58.797  Asymmetric Namespace Access Log Page:  Not Supported
00:06:58.797  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:06:58.797  Command Effects Log Page:              Supported
00:06:58.797  Get Log Page Extended Data:            Supported
00:06:58.797  Telemetry Log Pages:                   Not Supported
00:06:58.797  Persistent Event Log Pages:            Not Supported
00:06:58.797  Supported Log Pages Log Page:          May Support
00:06:58.797  Commands Supported & Effects Log Page: Not Supported
00:06:58.797  Feature Identifiers & Effects Log Page: May Support
00:06:58.797  NVMe-MI Commands & Effects Log Page:   May Support
00:06:58.797  Data Area 4 for Telemetry Log:         Not Supported
00:06:58.797  Error Log Page Entries Supported:      1
00:06:58.797  Keep Alive:                            Not Supported
00:06:58.797  
00:06:58.797  NVM Command Set Attributes
00:06:58.797  ==========================
00:06:58.797  Submission Queue Entry Size
00:06:58.797    Max:                       64
00:06:58.797    Min:                       64
00:06:58.797  Completion Queue Entry Size
00:06:58.797    Max:                       16
00:06:58.797    Min:                       16
00:06:58.797  Number of Namespaces:        256
00:06:58.797  Compare Command:             Supported
00:06:58.797  Write Uncorrectable Command: Not Supported
00:06:58.797  Dataset Management Command:  Supported
00:06:58.797  Write Zeroes Command:        Supported
00:06:58.797  Set Features Save Field:     Supported
00:06:58.797  Reservations:                Not Supported
00:06:58.797  Timestamp:                   Supported
00:06:58.797  Copy:                        Supported
00:06:58.797  Volatile Write Cache:        Present
00:06:58.797  Atomic Write Unit (Normal):  1
00:06:58.797  Atomic Write Unit (PFail):   1
00:06:58.797  Atomic Compare & Write Unit: 1
00:06:58.797  Fused Compare & Write:       Not Supported
00:06:58.797  Scatter-Gather List
00:06:58.797    SGL Command Set:           Supported
00:06:58.797    SGL Keyed:                 Not Supported
00:06:58.797    SGL Bit Bucket Descriptor: Not Supported
00:06:58.797    SGL Metadata Pointer:      Not Supported
00:06:58.797    Oversized SGL:             Not Supported
00:06:58.797    SGL Metadata Address:      Not Supported
00:06:58.797    SGL Offset:                Not Supported
00:06:58.797    Transport SGL Data Block:  Not Supported
00:06:58.797  Replay Protected Memory Block:  Not Supported
00:06:58.797  
00:06:58.797  Firmware Slot Information
00:06:58.797  =========================
00:06:58.797  Active slot:                 1
00:06:58.797  Slot 1 Firmware Revision:    1.0
00:06:58.797  
00:06:58.797  
00:06:58.797  Commands Supported and Effects
00:06:58.797  ==============================
00:06:58.797  Admin Commands
00:06:58.797  --------------
00:06:58.797     Delete I/O Submission Queue (00h): Supported 
00:06:58.797     Create I/O Submission Queue (01h): Supported 
00:06:58.797                    Get Log Page (02h): Supported 
00:06:58.797     Delete I/O Completion Queue (04h): Supported 
00:06:58.797     Create I/O Completion Queue (05h): Supported 
00:06:58.797                        Identify (06h): Supported 
00:06:58.797                           Abort (08h): Supported 
00:06:58.797                    Set Features (09h): Supported 
00:06:58.797                    Get Features (0Ah): Supported 
00:06:58.797      Asynchronous Event Request (0Ch): Supported 
00:06:58.797            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:06:58.797                  Directive Send (19h): Supported 
00:06:58.797               Directive Receive (1Ah): Supported 
00:06:58.797       Virtualization Management (1Ch): Supported 
00:06:58.797          Doorbell Buffer Config (7Ch): Supported 
00:06:58.797                      Format NVM (80h): Supported LBA-Change 
00:06:58.797  I/O Commands
00:06:58.797  ------------
00:06:58.797                           Flush (00h): Supported LBA-Change 
00:06:58.797                           Write (01h): Supported LBA-Change 
00:06:58.797                            Read (02h): Supported 
00:06:58.797                         Compare (05h): Supported 
00:06:58.797                    Write Zeroes (08h): Supported LBA-Change 
00:06:58.797              Dataset Management (09h): Supported LBA-Change 
00:06:58.797                         Unknown (0Ch): Supported 
00:06:58.797                         Unknown (12h): Supported 
00:06:58.797                            Copy (19h): Supported LBA-Change 
00:06:58.797                         Unknown (1Dh): Supported LBA-Change 
00:06:58.797  
00:06:58.797  Error Log
00:06:58.797  =========
00:06:58.797  
00:06:58.797  Arbitration
00:06:58.797  ===========
00:06:58.797  Arbitration Burst:           no limit
00:06:58.797  
00:06:58.797  Power Management
00:06:58.797  ================
00:06:58.797  Number of Power States:          1
00:06:58.797  Current Power State:             Power State #0
00:06:58.797  Power State #0:
00:06:58.797    Max Power:                     25.00 W
00:06:58.797    Non-Operational State:         Operational
00:06:58.797    Entry Latency:                 16 microseconds
00:06:58.797    Exit Latency:                  4 microseconds
00:06:58.797    Relative Read Throughput:      0
00:06:58.797    Relative Read Latency:         0
00:06:58.797    Relative Write Throughput:     0
00:06:58.797    Relative Write Latency:        0
00:06:58.797    Idle Power:                     Not Reported
00:06:58.797    Active Power:                   Not Reported
00:06:58.797  Non-Operational Permissive Mode: Not Supported
00:06:58.797  
00:06:58.797  Health Information
00:06:58.797  ==================
00:06:58.797  Critical Warnings:
00:06:58.797    Available Spare Space:     OK
00:06:58.797    Temperature:               OK
00:06:58.797    Device Reliability:        OK
00:06:58.797    Read Only:                 No
00:06:58.797    Volatile Memory Backup:    OK
00:06:58.797  Current Temperature:         323 Kelvin (50 Celsius)
00:06:58.797  Temperature Threshold:       343 Kelvin (70 Celsius)
00:06:58.797  Available Spare:             0%
00:06:58.797  Available Spare Threshold:   0%
00:06:58.797  Life Percentage Used:        0%
00:06:58.797  Data Units Read:             2721
00:06:58.797  Data Units Written:          2650
00:06:58.797  Host Read Commands:          144997
00:06:58.797  Host Write Commands:         144421
00:06:58.797  Controller Busy Time:        0 minutes
00:06:58.797  Power Cycles:                0
00:06:58.797  Power On Hours:              0 hours
00:06:58.797  Unsafe Shutdowns:            0
00:06:58.797  Unrecoverable Media Errors:  0
00:06:58.797  Lifetime Error Log Entries:  0
00:06:58.797  Warning Temperature Time:    0 minutes
00:06:58.797  Critical Temperature Time:   0 minutes
00:06:58.797  
00:06:58.797  Number of Queues
00:06:58.797  ================
00:06:58.797  Number of I/O Submission Queues:      64
00:06:58.797  Number of I/O Completion Queues:      64
00:06:58.797  
00:06:58.797  ZNS Specific Controller Data
00:06:58.797  ============================
00:06:58.797  Zone Append Size Limit:      0
00:06:58.797  
00:06:58.797  
00:06:58.797  Active Namespaces
00:06:58.797  =================
00:06:58.797  Namespace ID:1
00:06:58.797  Error Recovery Timeout:                Unlimited
00:06:58.797  Command Set Identifier:                NVM (00h)
00:06:58.797  Deallocate:                            Supported
00:06:58.797  Deallocated/Unwritten Error:           Supported
00:06:58.797  Deallocated Read Value:                All 0x00
00:06:58.797  Deallocate in Write Zeroes:            Not Supported
00:06:58.797  Deallocated Guard Field:               0xFFFF
00:06:58.797  Flush:                                 Supported
00:06:58.797  Reservation:                           Not Supported
00:06:58.797  Namespace Sharing Capabilities:        Private
00:06:58.797  Size (in LBAs):                        1310720 (5GiB)
00:06:58.797  Capacity (in LBAs):                    1310720 (5GiB)
00:06:58.797  Utilization (in LBAs):                 1310720 (5GiB)
00:06:58.797  Thin Provisioning:                     Not Supported
00:06:58.797  Per-NS Atomic Units:                   No
00:06:58.797  Maximum Single Source Range Length:    128
00:06:58.797  Maximum Copy Length:                   128
00:06:58.797  Maximum Source Range Count:            128
00:06:58.797  NGUID/EUI64 Never Reused:              No
00:06:58.797  Namespace Write Protected:             No
00:06:58.797  Number of LBA Formats:                 8
00:06:58.797  Current LBA Format:                    LBA Format #04
00:06:58.797  LBA Format #00: Data Size:   512  Metadata Size:     0
00:06:58.797  LBA Format #01: Data Size:   512  Metadata Size:     8
00:06:58.797  LBA Format #02: Data Size:   512  Metadata Size:    16
00:06:58.797  LBA Format #03: Data Size:   512  Metadata Size:    64
00:06:58.797  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:06:58.797  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:06:58.797  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:06:58.797  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:06:58.797  
00:06:58.797  NVM Specific Namespace Data
00:06:58.797  ===========================
00:06:58.797  Logical Block Storage Tag Mask:               0
00:06:58.797  Protection Information Capabilities:
00:06:58.797    16b Guard Protection Information Storage Tag Support:  No
00:06:58.797    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:06:58.797    Storage Tag Check Read Support:                        No
00:06:58.797  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.797  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:58.798   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:06:58.798   07:51:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:06:59.057  =====================================================
00:06:59.057  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:06:59.057  =====================================================
00:06:59.057  Controller Capabilities/Features
00:06:59.057  ================================
00:06:59.057  Vendor ID:                             1b36
00:06:59.057  Subsystem Vendor ID:                   1af4
00:06:59.057  Serial Number:                         12341
00:06:59.057  Model Number:                          QEMU NVMe Ctrl
00:06:59.057  Firmware Version:                      8.0.0
00:06:59.057  Recommended Arb Burst:                 6
00:06:59.057  IEEE OUI Identifier:                   00 54 52
00:06:59.057  Multi-path I/O
00:06:59.057    May have multiple subsystem ports:   No
00:06:59.057    May have multiple controllers:       No
00:06:59.057    Associated with SR-IOV VF:           No
00:06:59.057  Max Data Transfer Size:                524288
00:06:59.057  Max Number of Namespaces:              256
00:06:59.057  Max Number of I/O Queues:              64
00:06:59.057  NVMe Specification Version (VS):       1.4
00:06:59.057  NVMe Specification Version (Identify): 1.4
00:06:59.057  Maximum Queue Entries:                 2048
00:06:59.057  Contiguous Queues Required:            Yes
00:06:59.057  Arbitration Mechanisms Supported
00:06:59.057    Weighted Round Robin:                Not Supported
00:06:59.057    Vendor Specific:                     Not Supported
00:06:59.058  Reset Timeout:                         7500 ms
00:06:59.058  Doorbell Stride:                       4 bytes
00:06:59.058  NVM Subsystem Reset:                   Not Supported
00:06:59.058  Command Sets Supported
00:06:59.058    NVM Command Set:                     Supported
00:06:59.058  Boot Partition:                        Not Supported
00:06:59.058  Memory Page Size Minimum:              4096 bytes
00:06:59.058  Memory Page Size Maximum:              65536 bytes
00:06:59.058  Persistent Memory Region:              Supported
00:06:59.058  Optional Asynchronous Events Supported
00:06:59.058    Namespace Attribute Notices:         Supported
00:06:59.058    Firmware Activation Notices:         Not Supported
00:06:59.058    ANA Change Notices:                  Not Supported
00:06:59.058    PLE Aggregate Log Change Notices:    Not Supported
00:06:59.058    LBA Status Info Alert Notices:       Not Supported
00:06:59.058    EGE Aggregate Log Change Notices:    Not Supported
00:06:59.058    Normal NVM Subsystem Shutdown event: Not Supported
00:06:59.058    Zone Descriptor Change Notices:      Not Supported
00:06:59.058    Discovery Log Change Notices:        Not Supported
00:06:59.058  Controller Attributes
00:06:59.058    128-bit Host Identifier:             Not Supported
00:06:59.058    Non-Operational Permissive Mode:     Not Supported
00:06:59.058    NVM Sets:                            Not Supported
00:06:59.058    Read Recovery Levels:                Not Supported
00:06:59.058    Endurance Groups:                    Not Supported
00:06:59.058    Predictable Latency Mode:            Not Supported
00:06:59.058    Traffic Based Keep Alive:            Not Supported
00:06:59.058    Namespace Granularity:               Not Supported
00:06:59.058    SQ Associations:                     Not Supported
00:06:59.058    UUID List:                           Not Supported
00:06:59.058    Multi-Domain Subsystem:              Not Supported
00:06:59.058    Fixed Capacity Management:           Not Supported
00:06:59.058    Variable Capacity Management:        Not Supported
00:06:59.058    Delete Endurance Group:              Not Supported
00:06:59.058    Delete NVM Set:                      Not Supported
00:06:59.058    Extended LBA Formats Supported:      Supported
00:06:59.058    Flexible Data Placement Supported:   Not Supported
00:06:59.058  
00:06:59.058  Controller Memory Buffer Support
00:06:59.058  ================================
00:06:59.058  Supported:                             Yes
00:06:59.058  Total Size:                            134217728 bytes
00:06:59.058  Submission Queues in CMB:              Supported
00:06:59.058  Completion Queues in CMB:              Not Supported
00:06:59.058  Read data and metadata in CMB:         Supported
00:06:59.058  Write data and metadata in CMB:        Supported
00:06:59.058  
00:06:59.058  Persistent Memory Region Support
00:06:59.058  ================================
00:06:59.058  Supported:                             Yes
00:06:59.058  Total Size:                            33554432 bytes
00:06:59.058  Read data and metadata in PMR:         Supported
00:06:59.058  Write data and metadata in PMR:        Supported
00:06:59.058  
00:06:59.058  Admin Command Set Attributes
00:06:59.058  ============================
00:06:59.058  Security Send/Receive:                 Not Supported
00:06:59.058  Format NVM:                            Supported
00:06:59.058  Firmware Activate/Download:            Not Supported
00:06:59.058  Namespace Management:                  Supported
00:06:59.058  Device Self-Test:                      Not Supported
00:06:59.058  Directives:                            Supported
00:06:59.058  NVMe-MI:                               Not Supported
00:06:59.058  Virtualization Management:             Not Supported
00:06:59.058  Doorbell Buffer Config:                Supported
00:06:59.058  Get LBA Status Capability:             Not Supported
00:06:59.058  Command & Feature Lockdown Capability: Not Supported
00:06:59.058  Abort Command Limit:                   4
00:06:59.058  Async Event Request Limit:             4
00:06:59.058  Number of Firmware Slots:              N/A
00:06:59.058  Firmware Slot 1 Read-Only:             N/A
00:06:59.058  Firmware Activation Without Reset:     N/A
00:06:59.058  Multiple Update Detection Support:     N/A
00:06:59.058  Firmware Update Granularity:           No Information Provided
00:06:59.058  Per-Namespace SMART Log:               Yes
00:06:59.058  Asymmetric Namespace Access Log Page:  Not Supported
00:06:59.058  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:06:59.058  Command Effects Log Page:              Supported
00:06:59.058  Get Log Page Extended Data:            Supported
00:06:59.058  Telemetry Log Pages:                   Not Supported
00:06:59.058  Persistent Event Log Pages:            Not Supported
00:06:59.058  Supported Log Pages Log Page:          May Support
00:06:59.058  Commands Supported & Effects Log Page: Not Supported
00:06:59.058  Feature Identifiers & Effects Log Page: May Support
00:06:59.058  NVMe-MI Commands & Effects Log Page:   May Support
00:06:59.058  Data Area 4 for Telemetry Log:         Not Supported
00:06:59.058  Error Log Page Entries Supported:      1
00:06:59.058  Keep Alive:                            Not Supported
00:06:59.058  
00:06:59.058  NVM Command Set Attributes
00:06:59.058  ==========================
00:06:59.058  Submission Queue Entry Size
00:06:59.058    Max:                       64
00:06:59.058    Min:                       64
00:06:59.058  Completion Queue Entry Size
00:06:59.058    Max:                       16
00:06:59.058    Min:                       16
00:06:59.058  Number of Namespaces:        256
00:06:59.058  Compare Command:             Supported
00:06:59.058  Write Uncorrectable Command: Not Supported
00:06:59.058  Dataset Management Command:  Supported
00:06:59.058  Write Zeroes Command:        Supported
00:06:59.058  Set Features Save Field:     Supported
00:06:59.058  Reservations:                Not Supported
00:06:59.058  Timestamp:                   Supported
00:06:59.058  Copy:                        Supported
00:06:59.058  Volatile Write Cache:        Present
00:06:59.058  Atomic Write Unit (Normal):  1
00:06:59.058  Atomic Write Unit (PFail):   1
00:06:59.058  Atomic Compare & Write Unit: 1
00:06:59.058  Fused Compare & Write:       Not Supported
00:06:59.058  Scatter-Gather List
00:06:59.058    SGL Command Set:           Supported
00:06:59.058    SGL Keyed:                 Not Supported
00:06:59.058    SGL Bit Bucket Descriptor: Not Supported
00:06:59.058    SGL Metadata Pointer:      Not Supported
00:06:59.058    Oversized SGL:             Not Supported
00:06:59.058    SGL Metadata Address:      Not Supported
00:06:59.058    SGL Offset:                Not Supported
00:06:59.058    Transport SGL Data Block:  Not Supported
00:06:59.058  Replay Protected Memory Block:  Not Supported
00:06:59.058  
00:06:59.058  Firmware Slot Information
00:06:59.058  =========================
00:06:59.058  Active slot:                 1
00:06:59.058  Slot 1 Firmware Revision:    1.0
00:06:59.058  
00:06:59.058  
00:06:59.058  Commands Supported and Effects
00:06:59.058  ==============================
00:06:59.058  Admin Commands
00:06:59.058  --------------
00:06:59.058     Delete I/O Submission Queue (00h): Supported 
00:06:59.058     Create I/O Submission Queue (01h): Supported 
00:06:59.058                    Get Log Page (02h): Supported 
00:06:59.058     Delete I/O Completion Queue (04h): Supported 
00:06:59.058     Create I/O Completion Queue (05h): Supported 
00:06:59.058                        Identify (06h): Supported 
00:06:59.058                           Abort (08h): Supported 
00:06:59.058                    Set Features (09h): Supported 
00:06:59.058                    Get Features (0Ah): Supported 
00:06:59.058      Asynchronous Event Request (0Ch): Supported 
00:06:59.058            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:06:59.058                  Directive Send (19h): Supported 
00:06:59.058               Directive Receive (1Ah): Supported 
00:06:59.058       Virtualization Management (1Ch): Supported 
00:06:59.058          Doorbell Buffer Config (7Ch): Supported 
00:06:59.058                      Format NVM (80h): Supported LBA-Change 
00:06:59.058  I/O Commands
00:06:59.058  ------------
00:06:59.058                           Flush (00h): Supported LBA-Change 
00:06:59.058                           Write (01h): Supported LBA-Change 
00:06:59.058                            Read (02h): Supported 
00:06:59.058                         Compare (05h): Supported 
00:06:59.058                    Write Zeroes (08h): Supported LBA-Change 
00:06:59.058              Dataset Management (09h): Supported LBA-Change 
00:06:59.058                         Unknown (0Ch): Supported 
00:06:59.058                         Unknown (12h): Supported 
00:06:59.058                            Copy (19h): Supported LBA-Change 
00:06:59.058                         Unknown (1Dh): Supported LBA-Change 
00:06:59.058  
00:06:59.058  Error Log
00:06:59.058  =========
00:06:59.058  
00:06:59.058  Arbitration
00:06:59.058  ===========
00:06:59.058  Arbitration Burst:           no limit
00:06:59.058  
00:06:59.058  Power Management
00:06:59.058  ================
00:06:59.058  Number of Power States:          1
00:06:59.058  Current Power State:             Power State #0
00:06:59.058  Power State #0:
00:06:59.058    Max Power:                     25.00 W
00:06:59.058    Non-Operational State:         Operational
00:06:59.058    Entry Latency:                 16 microseconds
00:06:59.058    Exit Latency:                  4 microseconds
00:06:59.058    Relative Read Throughput:      0
00:06:59.058    Relative Read Latency:         0
00:06:59.058    Relative Write Throughput:     0
00:06:59.058    Relative Write Latency:        0
00:06:59.058    Idle Power:                     Not Reported
00:06:59.058    Active Power:                   Not Reported
00:06:59.058  Non-Operational Permissive Mode: Not Supported
00:06:59.058  
00:06:59.058  Health Information
00:06:59.058  ==================
00:06:59.058  Critical Warnings:
00:06:59.058    Available Spare Space:     OK
00:06:59.058    Temperature:               OK
00:06:59.058    Device Reliability:        OK
00:06:59.058    Read Only:                 No
00:06:59.058    Volatile Memory Backup:    OK
00:06:59.058  Current Temperature:         323 Kelvin (50 Celsius)
00:06:59.058  Temperature Threshold:       343 Kelvin (70 Celsius)
00:06:59.058  Available Spare:             0%
00:06:59.058  Available Spare Threshold:   0%
00:06:59.059  Life Percentage Used:        0%
00:06:59.059  Data Units Read:             3886
00:06:59.059  Data Units Written:          3760
00:06:59.059  Host Read Commands:          203475
00:06:59.059  Host Write Commands:         202364
00:06:59.059  Controller Busy Time:        0 minutes
00:06:59.059  Power Cycles:                0
00:06:59.059  Power On Hours:              0 hours
00:06:59.059  Unsafe Shutdowns:            0
00:06:59.059  Unrecoverable Media Errors:  0
00:06:59.059  Lifetime Error Log Entries:  0
00:06:59.059  Warning Temperature Time:    0 minutes
00:06:59.059  Critical Temperature Time:   0 minutes
00:06:59.059  
00:06:59.059  Number of Queues
00:06:59.059  ================
00:06:59.059  Number of I/O Submission Queues:      64
00:06:59.059  Number of I/O Completion Queues:      64
00:06:59.059  
00:06:59.059  ZNS Specific Controller Data
00:06:59.059  ============================
00:06:59.059  Zone Append Size Limit:      0
00:06:59.059  
00:06:59.059  
00:06:59.059  Active Namespaces
00:06:59.059  =================
00:06:59.059  Namespace ID:1
00:06:59.059  Error Recovery Timeout:                Unlimited
00:06:59.059  Command Set Identifier:                NVM (00h)
00:06:59.059  Deallocate:                            Supported
00:06:59.059  Deallocated/Unwritten Error:           Supported
00:06:59.059  Deallocated Read Value:                All 0x00
00:06:59.059  Deallocate in Write Zeroes:            Not Supported
00:06:59.059  Deallocated Guard Field:               0xFFFF
00:06:59.059  Flush:                                 Supported
00:06:59.059  Reservation:                           Not Supported
00:06:59.059  Namespace Sharing Capabilities:        Private
00:06:59.059  Size (in LBAs):                        1310720 (5GiB)
00:06:59.059  Capacity (in LBAs):                    1310720 (5GiB)
00:06:59.059  Utilization (in LBAs):                 1310720 (5GiB)
00:06:59.059  Thin Provisioning:                     Not Supported
00:06:59.059  Per-NS Atomic Units:                   No
00:06:59.059  Maximum Single Source Range Length:    128
00:06:59.059  Maximum Copy Length:                   128
00:06:59.059  Maximum Source Range Count:            128
00:06:59.059  NGUID/EUI64 Never Reused:              No
00:06:59.059  Namespace Write Protected:             No
00:06:59.059  Number of LBA Formats:                 8
00:06:59.059  Current LBA Format:                    LBA Format #04
00:06:59.059  LBA Format #00: Data Size:   512  Metadata Size:     0
00:06:59.059  LBA Format #01: Data Size:   512  Metadata Size:     8
00:06:59.059  LBA Format #02: Data Size:   512  Metadata Size:    16
00:06:59.059  LBA Format #03: Data Size:   512  Metadata Size:    64
00:06:59.059  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:06:59.059  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:06:59.059  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:06:59.059  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:06:59.059  
00:06:59.059  NVM Specific Namespace Data
00:06:59.059  ===========================
00:06:59.059  Logical Block Storage Tag Mask:               0
00:06:59.059  Protection Information Capabilities:
00:06:59.059    16b Guard Protection Information Storage Tag Support:  No
00:06:59.059    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:06:59.059    Storage Tag Check Read Support:                        No
00:06:59.059  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:06:59.059  
00:06:59.059  real	0m0.660s
00:06:59.059  user	0m0.239s
00:06:59.059  sys	0m0.346s
00:06:59.059   07:51:14 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:59.059   07:51:14 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:06:59.059  ************************************
00:06:59.059  END TEST nvme_identify
00:06:59.059  ************************************
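Editor's note: the xtrace lines above (nvme/nvme.sh@15-16) show how this test enumerates controllers and dumps their identify data. A minimal sketch for reproducing that step outside the harness follows; the bdfs values and the relative build path are assumptions here, since the harness normally discovers the devices and uses its own repo path.

    # Dump identify data for each PCIe BDF, mirroring the loop traced in nvme/nvme.sh.
    bdfs=(0000:00:10.0 0000:00:11.0)          # assumed; the test harness discovers these itself
    for bdf in "${bdfs[@]}"; do
        ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
    done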
00:06:59.059   07:51:14 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:06:59.059   07:51:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:59.059   07:51:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:59.059   07:51:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:06:59.059  ************************************
00:06:59.059  START TEST nvme_perf
00:06:59.059  ************************************
00:06:59.059   07:51:14 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:06:59.059   07:51:14 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:07:00.435  Initializing NVMe Controllers
00:07:00.435  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:00.435  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:00.435  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:00.435  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:00.435  Initialization complete. Launching workers.
00:07:00.435  ========================================================
00:07:00.435                                                                             Latency(us)
00:07:00.435  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:00.435  PCIE (0000:00:10.0) NSID 1 from core  0:   43692.73     512.02    2929.33    2140.56   12071.98
00:07:00.435  PCIE (0000:00:11.0) NSID 1 from core  0:   43756.70     512.77    2923.32    1604.71   10207.66
00:07:00.435  ========================================================
00:07:00.435  Total                                  :   87449.43    1024.80    2926.33    1604.71   12071.98
00:07:00.435  
00:07:00.435  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:00.435  =================================================================================
00:07:00.435    1.00000% :  2338.444us
00:07:00.435   10.00000% :  2606.545us
00:07:00.435   25.00000% :  2740.596us
00:07:00.435   50.00000% :  2889.542us
00:07:00.435   75.00000% :  3053.382us
00:07:00.435   90.00000% :  3232.116us
00:07:00.435   95.00000% :  3366.167us
00:07:00.435   98.00000% :  3708.742us
00:07:00.435   99.00000% :  3961.949us
00:07:00.435   99.50000% :  4140.684us
00:07:00.435   99.90000% : 11677.324us
00:07:00.435   99.99000% : 12034.793us
00:07:00.435   99.99900% : 12094.371us
00:07:00.435   99.99990% : 12094.371us
00:07:00.435   99.99999% : 12094.371us
00:07:00.435  
00:07:00.435  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:00.435  =================================================================================
00:07:00.435    1.00000% :  2323.549us
00:07:00.435   10.00000% :  2606.545us
00:07:00.435   25.00000% :  2740.596us
00:07:00.435   50.00000% :  2889.542us
00:07:00.435   75.00000% :  3053.382us
00:07:00.435   90.00000% :  3232.116us
00:07:00.435   95.00000% :  3351.273us
00:07:00.435   98.00000% :  3664.058us
00:07:00.435   99.00000% :  3932.160us
00:07:00.435   99.50000% :  4140.684us
00:07:00.435   99.90000% :  9770.822us
00:07:00.435   99.99000% : 10187.869us
00:07:00.435   99.99900% : 10247.447us
00:07:00.435   99.99990% : 10247.447us
00:07:00.435   99.99999% : 10247.447us
00:07:00.435  
00:07:00.435  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:00.435  ==============================================================================
00:07:00.435         Range in us     Cumulative    IO count
00:07:00.435   2129.920 -  2144.815:    0.0046%  (        2)
00:07:00.435   2144.815 -  2159.709:    0.0229%  (        8)
00:07:00.435   2159.709 -  2174.604:    0.0503%  (       12)
00:07:00.435   2174.604 -  2189.498:    0.0846%  (       15)
00:07:00.435   2189.498 -  2204.393:    0.1075%  (       10)
00:07:00.435   2204.393 -  2219.287:    0.1601%  (       23)
00:07:00.435   2219.287 -  2234.182:    0.2150%  (       24)
00:07:00.435   2234.182 -  2249.076:    0.2837%  (       30)
00:07:00.435   2249.076 -  2263.971:    0.3706%  (       38)
00:07:00.435   2263.971 -  2278.865:    0.4758%  (       46)
00:07:00.435   2278.865 -  2293.760:    0.6131%  (       60)
00:07:00.435   2293.760 -  2308.655:    0.7435%  (       57)
00:07:00.435   2308.655 -  2323.549:    0.9059%  (       71)
00:07:00.435   2323.549 -  2338.444:    1.0981%  (       84)
00:07:00.435   2338.444 -  2353.338:    1.2971%  (       87)
00:07:00.435   2353.338 -  2368.233:    1.4939%  (       86)
00:07:00.435   2368.233 -  2383.127:    1.8004%  (      134)
00:07:00.435   2383.127 -  2398.022:    2.0864%  (      125)
00:07:00.435   2398.022 -  2412.916:    2.3929%  (      134)
00:07:00.435   2412.916 -  2427.811:    2.6766%  (      124)
00:07:00.435   2427.811 -  2442.705:    2.9740%  (      130)
00:07:00.435   2442.705 -  2457.600:    3.3080%  (      146)
00:07:00.435   2457.600 -  2472.495:    3.6740%  (      160)
00:07:00.435   2472.495 -  2487.389:    4.0996%  (      186)
00:07:00.435   2487.389 -  2502.284:    4.5502%  (      197)
00:07:00.435   2502.284 -  2517.178:    5.0421%  (      215)
00:07:00.435   2517.178 -  2532.073:    5.5980%  (      243)
00:07:00.435   2532.073 -  2546.967:    6.2637%  (      291)
00:07:00.435   2546.967 -  2561.862:    7.0621%  (      349)
00:07:00.435   2561.862 -  2576.756:    7.9429%  (      385)
00:07:00.435   2576.756 -  2591.651:    8.9632%  (      446)
00:07:00.435   2591.651 -  2606.545:   10.0499%  (      475)
00:07:00.435   2606.545 -  2621.440:   11.3058%  (      549)
00:07:00.435   2621.440 -  2636.335:   12.6830%  (      602)
00:07:00.435   2636.335 -  2651.229:   14.1243%  (      630)
00:07:00.435   2651.229 -  2666.124:   15.7966%  (      731)
00:07:00.435   2666.124 -  2681.018:   17.5444%  (      764)
00:07:00.435   2681.018 -  2695.913:   19.4523%  (      834)
00:07:00.435   2695.913 -  2710.807:   21.4289%  (      864)
00:07:00.435   2710.807 -  2725.702:   23.5382%  (      922)
00:07:00.435   2725.702 -  2740.596:   25.7595%  (      971)
00:07:00.435   2740.596 -  2755.491:   28.0175%  (      987)
00:07:00.435   2755.491 -  2770.385:   30.3464%  (     1018)
00:07:00.435   2770.385 -  2785.280:   32.7393%  (     1046)
00:07:00.435   2785.280 -  2800.175:   35.1025%  (     1033)
00:07:00.435   2800.175 -  2815.069:   37.5663%  (     1077)
00:07:00.435   2815.069 -  2829.964:   40.0577%  (     1089)
00:07:00.435   2829.964 -  2844.858:   42.5375%  (     1084)
00:07:00.435   2844.858 -  2859.753:   45.0792%  (     1111)
00:07:00.435   2859.753 -  2874.647:   47.7009%  (     1146)
00:07:00.435   2874.647 -  2889.542:   50.2539%  (     1116)
00:07:00.435   2889.542 -  2904.436:   52.7796%  (     1104)
00:07:00.435   2904.436 -  2919.331:   55.2365%  (     1074)
00:07:00.435   2919.331 -  2934.225:   57.7759%  (     1110)
00:07:00.435   2934.225 -  2949.120:   60.1940%  (     1057)
00:07:00.435   2949.120 -  2964.015:   62.5503%  (     1030)
00:07:00.435   2964.015 -  2978.909:   64.8701%  (     1014)
00:07:00.435   2978.909 -  2993.804:   67.1349%  (      990)
00:07:00.435   2993.804 -  3008.698:   69.3173%  (      954)
00:07:00.435   3008.698 -  3023.593:   71.4952%  (      952)
00:07:00.435   3023.593 -  3038.487:   73.5199%  (      885)
00:07:00.435   3038.487 -  3053.382:   75.4072%  (      825)
00:07:00.435   3053.382 -  3068.276:   77.2442%  (      803)
00:07:00.435   3068.276 -  3083.171:   78.9989%  (      767)
00:07:00.435   3083.171 -  3098.065:   80.6072%  (      703)
00:07:00.435   3098.065 -  3112.960:   82.0644%  (      637)
00:07:00.435   3112.960 -  3127.855:   83.4576%  (      609)
00:07:00.435   3127.855 -  3142.749:   84.6838%  (      536)
00:07:00.435   3142.749 -  3157.644:   85.8712%  (      519)
00:07:00.435   3157.644 -  3172.538:   86.9372%  (      466)
00:07:00.435   3172.538 -  3187.433:   87.9324%  (      435)
00:07:00.435   3187.433 -  3202.327:   88.9001%  (      423)
00:07:00.435   3202.327 -  3217.222:   89.7969%  (      392)
00:07:00.435   3217.222 -  3232.116:   90.6250%  (      362)
00:07:00.435   3232.116 -  3247.011:   91.4005%  (      339)
00:07:00.435   3247.011 -  3261.905:   92.1280%  (      318)
00:07:00.435   3261.905 -  3276.800:   92.7915%  (      290)
00:07:00.435   3276.800 -  3291.695:   93.3359%  (      238)
00:07:00.435   3291.695 -  3306.589:   93.8072%  (      206)
00:07:00.435   3306.589 -  3321.484:   94.2327%  (      186)
00:07:00.435   3321.484 -  3336.378:   94.6353%  (      176)
00:07:00.435   3336.378 -  3351.273:   94.9899%  (      155)
00:07:00.435   3351.273 -  3366.167:   95.2828%  (      128)
00:07:00.435   3366.167 -  3381.062:   95.5390%  (      112)
00:07:00.435   3381.062 -  3395.956:   95.7700%  (      101)
00:07:00.435   3395.956 -  3410.851:   95.9805%  (       92)
00:07:00.435   3410.851 -  3425.745:   96.1704%  (       83)
00:07:00.435   3425.745 -  3440.640:   96.3328%  (       71)
00:07:00.435   3440.640 -  3455.535:   96.4952%  (       71)
00:07:00.435   3455.535 -  3470.429:   96.6622%  (       73)
00:07:00.435   3470.429 -  3485.324:   96.8087%  (       64)
00:07:00.435   3485.324 -  3500.218:   96.9413%  (       58)
00:07:00.435   3500.218 -  3515.113:   97.0672%  (       55)
00:07:00.435   3515.113 -  3530.007:   97.1678%  (       44)
00:07:00.435   3530.007 -  3544.902:   97.2639%  (       42)
00:07:00.435   3544.902 -  3559.796:   97.3554%  (       40)
00:07:00.435   3559.796 -  3574.691:   97.4469%  (       40)
00:07:00.435   3574.691 -  3589.585:   97.5178%  (       31)
00:07:00.435   3589.585 -  3604.480:   97.6002%  (       36)
00:07:00.435   3604.480 -  3619.375:   97.6734%  (       32)
00:07:00.435   3619.375 -  3634.269:   97.7375%  (       28)
00:07:00.435   3634.269 -  3649.164:   97.8084%  (       31)
00:07:00.435   3649.164 -  3664.058:   97.8702%  (       27)
00:07:00.435   3664.058 -  3678.953:   97.9273%  (       25)
00:07:00.435   3678.953 -  3693.847:   97.9937%  (       29)
00:07:00.436   3693.847 -  3708.742:   98.0532%  (       26)
00:07:00.436   3708.742 -  3723.636:   98.1172%  (       28)
00:07:00.436   3723.636 -  3738.531:   98.1721%  (       24)
00:07:00.436   3738.531 -  3753.425:   98.2316%  (       26)
00:07:00.436   3753.425 -  3768.320:   98.2774%  (       20)
00:07:00.436   3768.320 -  3783.215:   98.3391%  (       27)
00:07:00.436   3783.215 -  3798.109:   98.4078%  (       30)
00:07:00.436   3798.109 -  3813.004:   98.4695%  (       27)
00:07:00.436   3813.004 -  3842.793:   98.6068%  (       60)
00:07:00.436   3842.793 -  3872.582:   98.7463%  (       61)
00:07:00.436   3872.582 -  3902.371:   98.8676%  (       53)
00:07:00.436   3902.371 -  3932.160:   98.9957%  (       56)
00:07:00.436   3932.160 -  3961.949:   99.1169%  (       53)
00:07:00.436   3961.949 -  3991.738:   99.2176%  (       44)
00:07:00.436   3991.738 -  4021.527:   99.3000%  (       36)
00:07:00.436   4021.527 -  4051.316:   99.3594%  (       26)
00:07:00.436   4051.316 -  4081.105:   99.4235%  (       28)
00:07:00.436   4081.105 -  4110.895:   99.4830%  (       26)
00:07:00.436   4110.895 -  4140.684:   99.5379%  (       24)
00:07:00.436   4140.684 -  4170.473:   99.5882%  (       22)
00:07:00.436   4170.473 -  4200.262:   99.6340%  (       20)
00:07:00.436   4200.262 -  4230.051:   99.6591%  (       11)
00:07:00.436   4230.051 -  4259.840:   99.6820%  (       10)
00:07:00.436   4259.840 -  4289.629:   99.6980%  (        7)
00:07:00.436   4289.629 -  4319.418:   99.7072%  (        4)
00:07:00.436   8996.305 -  9055.884:   99.7186%  (        5)
00:07:00.436   9055.884 -  9115.462:   99.7323%  (        6)
00:07:00.436   9115.462 -  9175.040:   99.7438%  (        5)
00:07:00.436   9175.040 -  9234.618:   99.7598%  (        7)
00:07:00.436   9234.618 -  9294.196:   99.7712%  (        5)
00:07:00.436   9294.196 -  9353.775:   99.7827%  (        5)
00:07:00.436   9353.775 -  9413.353:   99.7964%  (        6)
00:07:00.436   9413.353 -  9472.931:   99.8055%  (        4)
00:07:00.436   9472.931 -  9532.509:   99.8193%  (        6)
00:07:00.436   9532.509 -  9592.087:   99.8330%  (        6)
00:07:00.436   9592.087 -  9651.665:   99.8353%  (        1)
00:07:00.436   9770.822 -  9830.400:   99.8421%  (        3)
00:07:00.436   9830.400 -  9889.978:   99.8536%  (        5)
00:07:00.436  11379.433 - 11439.011:   99.8559%  (        1)
00:07:00.436  11439.011 - 11498.589:   99.8696%  (        6)
00:07:00.436  11498.589 - 11558.167:   99.8833%  (        6)
00:07:00.436  11558.167 - 11617.745:   99.8971%  (        6)
00:07:00.436  11617.745 - 11677.324:   99.9108%  (        6)
00:07:00.436  11677.324 - 11736.902:   99.9245%  (        6)
00:07:00.436  11736.902 - 11796.480:   99.9382%  (        6)
00:07:00.436  11796.480 - 11856.058:   99.9520%  (        6)
00:07:00.436  11856.058 - 11915.636:   99.9657%  (        6)
00:07:00.436  11915.636 - 11975.215:   99.9771%  (        5)
00:07:00.436  11975.215 - 12034.793:   99.9908%  (        6)
00:07:00.436  12034.793 - 12094.371:  100.0000%  (        4)
00:07:00.436  
00:07:00.436  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:00.436  ==============================================================================
00:07:00.436         Range in us     Cumulative    IO count
00:07:00.436   1601.164 -  1608.611:    0.0023%  (        1)
00:07:00.436   1608.611 -  1616.058:    0.0046%  (        1)
00:07:00.436   1616.058 -  1623.505:    0.0069%  (        1)
00:07:00.436   1623.505 -  1630.953:    0.0091%  (        1)
00:07:00.436   1630.953 -  1638.400:    0.0137%  (        2)
00:07:00.436   1638.400 -  1645.847:    0.0160%  (        1)
00:07:00.436   1645.847 -  1653.295:    0.0183%  (        1)
00:07:00.436   1653.295 -  1660.742:    0.0206%  (        1)
00:07:00.436   1660.742 -  1668.189:    0.0228%  (        1)
00:07:00.436   1668.189 -  1675.636:    0.0274%  (        2)
00:07:00.436   1675.636 -  1683.084:    0.0297%  (        1)
00:07:00.436   1683.084 -  1690.531:    0.0343%  (        2)
00:07:00.436   1690.531 -  1697.978:    0.0365%  (        1)
00:07:00.436   1697.978 -  1705.425:    0.0388%  (        1)
00:07:00.436   1705.425 -  1712.873:    0.0434%  (        2)
00:07:00.436   1712.873 -  1720.320:    0.0457%  (        1)
00:07:00.436   1720.320 -  1727.767:    0.0503%  (        2)
00:07:00.436   1727.767 -  1735.215:    0.0525%  (        1)
00:07:00.436   1735.215 -  1742.662:    0.0571%  (        2)
00:07:00.436   1742.662 -  1750.109:    0.0594%  (        1)
00:07:00.436   1750.109 -  1757.556:    0.0640%  (        2)
00:07:00.436   1757.556 -  1765.004:    0.0662%  (        1)
00:07:00.436   1765.004 -  1772.451:    0.0708%  (        2)
00:07:00.436   1772.451 -  1779.898:    0.0731%  (        1)
00:07:00.436   1779.898 -  1787.345:    0.0777%  (        2)
00:07:00.436   1787.345 -  1794.793:    0.0800%  (        1)
00:07:00.436   1794.793 -  1802.240:    0.0822%  (        1)
00:07:00.436   1802.240 -  1809.687:    0.0868%  (        2)
00:07:00.436   1809.687 -  1817.135:    0.0914%  (        2)
00:07:00.436   1817.135 -  1824.582:    0.0937%  (        1)
00:07:00.436   1824.582 -  1832.029:    0.0982%  (        2)
00:07:00.436   1832.029 -  1839.476:    0.1005%  (        1)
00:07:00.436   1839.476 -  1846.924:    0.1051%  (        2)
00:07:00.436   1846.924 -  1854.371:    0.1074%  (        1)
00:07:00.436   1854.371 -  1861.818:    0.1119%  (        2)
00:07:00.436   1861.818 -  1869.265:    0.1142%  (        1)
00:07:00.436   1869.265 -  1876.713:    0.1165%  (        1)
00:07:00.436   1876.713 -  1884.160:    0.1211%  (        2)
00:07:00.436   1884.160 -  1891.607:    0.1234%  (        1)
00:07:00.436   1891.607 -  1899.055:    0.1279%  (        2)
00:07:00.436   1899.055 -  1906.502:    0.1302%  (        1)
00:07:00.436   1906.502 -  1921.396:    0.1393%  (        4)
00:07:00.436   1921.396 -  1936.291:    0.1439%  (        2)
00:07:00.436   1936.291 -  1951.185:    0.1462%  (        1)
00:07:00.436   2085.236 -  2100.131:    0.1485%  (        1)
00:07:00.436   2115.025 -  2129.920:    0.1508%  (        1)
00:07:00.436   2129.920 -  2144.815:    0.1576%  (        3)
00:07:00.436   2144.815 -  2159.709:    0.1668%  (        4)
00:07:00.436   2159.709 -  2174.604:    0.1736%  (        3)
00:07:00.436   2174.604 -  2189.498:    0.1827%  (        4)
00:07:00.436   2189.498 -  2204.393:    0.2124%  (       13)
00:07:00.436   2204.393 -  2219.287:    0.2467%  (       15)
00:07:00.436   2219.287 -  2234.182:    0.3038%  (       25)
00:07:00.436   2234.182 -  2249.076:    0.3724%  (       30)
00:07:00.436   2249.076 -  2263.971:    0.4569%  (       37)
00:07:00.436   2263.971 -  2278.865:    0.5711%  (       50)
00:07:00.436   2278.865 -  2293.760:    0.7036%  (       58)
00:07:00.436   2293.760 -  2308.655:    0.8475%  (       63)
00:07:00.436   2308.655 -  2323.549:    1.0394%  (       84)
00:07:00.436   2323.549 -  2338.444:    1.2198%  (       79)
00:07:00.436   2338.444 -  2353.338:    1.4574%  (      104)
00:07:00.436   2353.338 -  2368.233:    1.6562%  (       87)
00:07:00.436   2368.233 -  2383.127:    1.8960%  (      105)
00:07:00.436   2383.127 -  2398.022:    2.1336%  (      104)
00:07:00.436   2398.022 -  2412.916:    2.3894%  (      112)
00:07:00.436   2412.916 -  2427.811:    2.6407%  (      110)
00:07:00.436   2427.811 -  2442.705:    2.9194%  (      122)
00:07:00.436   2442.705 -  2457.600:    3.2506%  (      145)
00:07:00.436   2457.600 -  2472.495:    3.6139%  (      159)
00:07:00.436   2472.495 -  2487.389:    4.0205%  (      178)
00:07:00.436   2487.389 -  2502.284:    4.5025%  (      211)
00:07:00.436   2502.284 -  2517.178:    5.0461%  (      238)
00:07:00.436   2517.178 -  2532.073:    5.6378%  (      259)
00:07:00.436   2532.073 -  2546.967:    6.3208%  (      299)
00:07:00.436   2546.967 -  2561.862:    7.0906%  (      337)
00:07:00.436   2561.862 -  2576.756:    7.9564%  (      379)
00:07:00.436   2576.756 -  2591.651:    8.9250%  (      424)
00:07:00.436   2591.651 -  2606.545:   10.0489%  (      492)
00:07:00.436   2606.545 -  2621.440:   11.2756%  (      537)
00:07:00.436   2621.440 -  2636.335:   12.6759%  (      613)
00:07:00.436   2636.335 -  2651.229:   14.1767%  (      657)
00:07:00.436   2651.229 -  2666.124:   15.8694%  (      741)
00:07:00.436   2666.124 -  2681.018:   17.6969%  (      800)
00:07:00.436   2681.018 -  2695.913:   19.5792%  (      824)
00:07:00.436   2695.913 -  2710.807:   21.5506%  (      863)
00:07:00.436   2710.807 -  2725.702:   23.6499%  (      919)
00:07:00.436   2725.702 -  2740.596:   25.8589%  (      967)
00:07:00.436   2740.596 -  2755.491:   28.0816%  (      973)
00:07:00.436   2755.491 -  2770.385:   30.3934%  (     1012)
00:07:00.436   2770.385 -  2785.280:   32.8171%  (     1061)
00:07:00.436   2785.280 -  2800.175:   35.1951%  (     1041)
00:07:00.436   2800.175 -  2815.069:   37.5845%  (     1046)
00:07:00.436   2815.069 -  2829.964:   40.0882%  (     1096)
00:07:00.436   2829.964 -  2844.858:   42.5804%  (     1091)
00:07:00.436   2844.858 -  2859.753:   45.0886%  (     1098)
00:07:00.436   2859.753 -  2874.647:   47.5969%  (     1098)
00:07:00.436   2874.647 -  2889.542:   50.1005%  (     1096)
00:07:00.436   2889.542 -  2904.436:   52.5905%  (     1090)
00:07:00.436   2904.436 -  2919.331:   55.0644%  (     1083)
00:07:00.436   2919.331 -  2934.225:   57.5612%  (     1093)
00:07:00.436   2934.225 -  2949.120:   60.0101%  (     1072)
00:07:00.436   2949.120 -  2964.015:   62.3447%  (     1022)
00:07:00.436   2964.015 -  2978.909:   64.6496%  (     1009)
00:07:00.436   2978.909 -  2993.804:   66.9773%  (     1019)
00:07:00.436   2993.804 -  3008.698:   69.1612%  (      956)
00:07:00.436   3008.698 -  3023.593:   71.2696%  (      923)
00:07:00.436   3023.593 -  3038.487:   73.2479%  (      866)
00:07:00.436   3038.487 -  3053.382:   75.2147%  (      861)
00:07:00.436   3053.382 -  3068.276:   77.0993%  (      825)
00:07:00.436   3068.276 -  3083.171:   78.8537%  (      768)
00:07:00.436   3083.171 -  3098.065:   80.5122%  (      726)
00:07:00.436   3098.065 -  3112.960:   81.9856%  (      645)
00:07:00.436   3112.960 -  3127.855:   83.3653%  (      604)
00:07:00.436   3127.855 -  3142.749:   84.6103%  (      545)
00:07:00.436   3142.749 -  3157.644:   85.8187%  (      529)
00:07:00.436   3157.644 -  3172.538:   86.8992%  (      473)
00:07:00.436   3172.538 -  3187.433:   87.8975%  (      437)
00:07:00.436   3187.433 -  3202.327:   88.8798%  (      430)
00:07:00.436   3202.327 -  3217.222:   89.7615%  (      386)
00:07:00.436   3217.222 -  3232.116:   90.5953%  (      365)
00:07:00.436   3232.116 -  3247.011:   91.3925%  (      349)
00:07:00.436   3247.011 -  3261.905:   92.1235%  (      320)
00:07:00.436   3261.905 -  3276.800:   92.8020%  (      297)
00:07:00.436   3276.800 -  3291.695:   93.3959%  (      260)
00:07:00.436   3291.695 -  3306.589:   93.8916%  (      217)
00:07:00.436   3306.589 -  3321.484:   94.3622%  (      206)
00:07:00.436   3321.484 -  3336.378:   94.7505%  (      170)
00:07:00.436   3336.378 -  3351.273:   95.1001%  (      153)
00:07:00.436   3351.273 -  3366.167:   95.4130%  (      137)
00:07:00.436   3366.167 -  3381.062:   95.6826%  (      118)
00:07:00.436   3381.062 -  3395.956:   95.9247%  (      106)
00:07:00.436   3395.956 -  3410.851:   96.1257%  (       88)
00:07:00.436   3410.851 -  3425.745:   96.3222%  (       86)
00:07:00.437   3425.745 -  3440.640:   96.5072%  (       81)
00:07:00.437   3440.640 -  3455.535:   96.6785%  (       75)
00:07:00.437   3455.535 -  3470.429:   96.8042%  (       55)
00:07:00.437   3470.429 -  3485.324:   96.9298%  (       55)
00:07:00.437   3485.324 -  3500.218:   97.0577%  (       56)
00:07:00.437   3500.218 -  3515.113:   97.1651%  (       47)
00:07:00.437   3515.113 -  3530.007:   97.2679%  (       45)
00:07:00.437   3530.007 -  3544.902:   97.3707%  (       45)
00:07:00.437   3544.902 -  3559.796:   97.4484%  (       34)
00:07:00.437   3559.796 -  3574.691:   97.5306%  (       36)
00:07:00.437   3574.691 -  3589.585:   97.6060%  (       33)
00:07:00.437   3589.585 -  3604.480:   97.6997%  (       41)
00:07:00.437   3604.480 -  3619.375:   97.7796%  (       35)
00:07:00.437   3619.375 -  3634.269:   97.8573%  (       34)
00:07:00.437   3634.269 -  3649.164:   97.9327%  (       33)
00:07:00.437   3649.164 -  3664.058:   98.0035%  (       31)
00:07:00.437   3664.058 -  3678.953:   98.0789%  (       33)
00:07:00.437   3678.953 -  3693.847:   98.1497%  (       31)
00:07:00.437   3693.847 -  3708.742:   98.2228%  (       32)
00:07:00.437   3708.742 -  3723.636:   98.2844%  (       27)
00:07:00.437   3723.636 -  3738.531:   98.3484%  (       28)
00:07:00.437   3738.531 -  3753.425:   98.4078%  (       26)
00:07:00.437   3753.425 -  3768.320:   98.4740%  (       29)
00:07:00.437   3768.320 -  3783.215:   98.5380%  (       28)
00:07:00.437   3783.215 -  3798.109:   98.6043%  (       29)
00:07:00.437   3798.109 -  3813.004:   98.6796%  (       33)
00:07:00.437   3813.004 -  3842.793:   98.8007%  (       53)
00:07:00.437   3842.793 -  3872.582:   98.9104%  (       48)
00:07:00.437   3872.582 -  3902.371:   98.9903%  (       35)
00:07:00.437   3902.371 -  3932.160:   99.0771%  (       38)
00:07:00.437   3932.160 -  3961.949:   99.1639%  (       38)
00:07:00.437   3961.949 -  3991.738:   99.2439%  (       35)
00:07:00.437   3991.738 -  4021.527:   99.3170%  (       32)
00:07:00.437   4021.527 -  4051.316:   99.3855%  (       30)
00:07:00.437   4051.316 -  4081.105:   99.4449%  (       26)
00:07:00.437   4081.105 -  4110.895:   99.4929%  (       21)
00:07:00.437   4110.895 -  4140.684:   99.5431%  (       22)
00:07:00.437   4140.684 -  4170.473:   99.5980%  (       24)
00:07:00.437   4170.473 -  4200.262:   99.6459%  (       21)
00:07:00.437   4200.262 -  4230.051:   99.6733%  (       12)
00:07:00.437   4230.051 -  4259.840:   99.6848%  (        5)
00:07:00.437   4259.840 -  4289.629:   99.6916%  (        3)
00:07:00.437   4289.629 -  4319.418:   99.7030%  (        5)
00:07:00.437   4319.418 -  4349.207:   99.7076%  (        2)
00:07:00.437   7804.742 -  7864.320:   99.7190%  (        5)
00:07:00.437   7864.320 -  7923.898:   99.7327%  (        6)
00:07:00.437   7923.898 -  7983.476:   99.7464%  (        6)
00:07:00.437   7983.476 -  8043.055:   99.7601%  (        6)
00:07:00.437   8043.055 -  8102.633:   99.7738%  (        6)
00:07:00.437   8102.633 -  8162.211:   99.7876%  (        6)
00:07:00.437   8162.211 -  8221.789:   99.8035%  (        7)
00:07:00.437   8221.789 -  8281.367:   99.8150%  (        5)
00:07:00.437   8281.367 -  8340.945:   99.8310%  (        7)
00:07:00.437   8340.945 -  8400.524:   99.8424%  (        5)
00:07:00.437   8400.524 -  8460.102:   99.8538%  (        5)
00:07:00.437   9472.931 -  9532.509:   99.8652%  (        5)
00:07:00.437   9532.509 -  9592.087:   99.8721%  (        3)
00:07:00.437   9592.087 -  9651.665:   99.8858%  (        6)
00:07:00.437   9651.665 -  9711.244:   99.8972%  (        5)
00:07:00.437   9711.244 -  9770.822:   99.9132%  (        7)
00:07:00.437   9770.822 -  9830.400:   99.9200%  (        3)
00:07:00.437   9830.400 -  9889.978:   99.9360%  (        7)
00:07:00.437   9889.978 -  9949.556:   99.9497%  (        6)
00:07:00.437   9949.556 - 10009.135:   99.9566%  (        3)
00:07:00.437  10009.135 - 10068.713:   99.9680%  (        5)
00:07:00.437  10068.713 - 10128.291:   99.9840%  (        7)
00:07:00.437  10128.291 - 10187.869:   99.9931%  (        4)
00:07:00.437  10187.869 - 10247.447:  100.0000%  (        3)
00:07:00.437  
00:07:00.437   07:51:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:01.373  Initializing NVMe Controllers
00:07:01.373  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:01.373  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:01.373  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:01.373  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:01.373  Initialization complete. Launching workers.
00:07:01.373  ========================================================
00:07:01.373                                                                             Latency(us)
00:07:01.373  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:01.373  PCIE (0000:00:10.0) NSID 1 from core  0:   37385.38     438.11    3424.74    1713.12   10072.02
00:07:01.373  PCIE (0000:00:11.0) NSID 1 from core  0:   37449.29     438.86    3418.38    1833.82    8007.65
00:07:01.373  ========================================================
00:07:01.373  Total                                  :   74834.67     876.97    3421.56    1713.12   10072.02
00:07:01.373  
00:07:01.373  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:01.373  =================================================================================
00:07:01.373    1.00000% :  2695.913us
00:07:01.373   10.00000% :  2949.120us
00:07:01.373   25.00000% :  3157.644us
00:07:01.373   50.00000% :  3366.167us
00:07:01.373   75.00000% :  3619.375us
00:07:01.373   90.00000% :  3932.160us
00:07:01.373   95.00000% :  4170.473us
00:07:01.373   98.00000% :  4498.153us
00:07:01.373   99.00000% :  4796.044us
00:07:01.373   99.50000% :  5004.567us
00:07:01.373   99.90000% :  9770.822us
00:07:01.373   99.99000% : 10068.713us
00:07:01.373   99.99900% : 10128.291us
00:07:01.373   99.99990% : 10128.291us
00:07:01.373   99.99999% : 10128.291us
00:07:01.373  
00:07:01.373  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:01.373  =================================================================================
00:07:01.373    1.00000% :  2681.018us
00:07:01.373   10.00000% :  2978.909us
00:07:01.373   25.00000% :  3172.538us
00:07:01.373   50.00000% :  3366.167us
00:07:01.373   75.00000% :  3619.375us
00:07:01.373   90.00000% :  3932.160us
00:07:01.373   95.00000% :  4140.684us
00:07:01.373   98.00000% :  4438.575us
00:07:01.373   99.00000% :  4676.887us
00:07:01.373   99.50000% :  4885.411us
00:07:01.373   99.90000% :  7685.585us
00:07:01.373   99.99000% :  7983.476us
00:07:01.373   99.99900% :  8043.055us
00:07:01.373   99.99990% :  8043.055us
00:07:01.373   99.99999% :  8043.055us
00:07:01.373  
00:07:01.373  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:01.373  ==============================================================================
00:07:01.373         Range in us     Cumulative    IO count
00:07:01.373   1712.873 -  1720.320:    0.0027%  (        1)
00:07:01.373   2487.389 -  2502.284:    0.0107%  (        3)
00:07:01.373   2502.284 -  2517.178:    0.0214%  (        4)
00:07:01.373   2517.178 -  2532.073:    0.0294%  (        3)
00:07:01.373   2532.073 -  2546.967:    0.0507%  (        8)
00:07:01.373   2546.967 -  2561.862:    0.0801%  (       11)
00:07:01.373   2561.862 -  2576.756:    0.1042%  (        9)
00:07:01.373   2576.756 -  2591.651:    0.1496%  (       17)
00:07:01.373   2591.651 -  2606.545:    0.2377%  (       33)
00:07:01.373   2606.545 -  2621.440:    0.3472%  (       41)
00:07:01.373   2621.440 -  2636.335:    0.4247%  (       29)
00:07:01.373   2636.335 -  2651.229:    0.5422%  (       44)
00:07:01.373   2651.229 -  2666.124:    0.6597%  (       44)
00:07:01.373   2666.124 -  2681.018:    0.8600%  (       75)
00:07:01.373   2681.018 -  2695.913:    1.0710%  (       79)
00:07:01.374   2695.913 -  2710.807:    1.3034%  (       87)
00:07:01.374   2710.807 -  2725.702:    1.5304%  (       85)
00:07:01.374   2725.702 -  2740.596:    1.8990%  (      138)
00:07:01.374   2740.596 -  2755.491:    2.2169%  (      119)
00:07:01.374   2755.491 -  2770.385:    2.5454%  (      123)
00:07:01.374   2770.385 -  2785.280:    2.9247%  (      142)
00:07:01.374   2785.280 -  2800.175:    3.2879%  (      136)
00:07:01.374   2800.175 -  2815.069:    3.7847%  (      186)
00:07:01.374   2815.069 -  2829.964:    4.2895%  (      189)
00:07:01.374   2829.964 -  2844.858:    4.8451%  (      208)
00:07:01.374   2844.858 -  2859.753:    5.4193%  (      215)
00:07:01.374   2859.753 -  2874.647:    6.0096%  (      221)
00:07:01.374   2874.647 -  2889.542:    6.7147%  (      264)
00:07:01.374   2889.542 -  2904.436:    7.5214%  (      302)
00:07:01.374   2904.436 -  2919.331:    8.3093%  (      295)
00:07:01.374   2919.331 -  2934.225:    9.1800%  (      326)
00:07:01.374   2934.225 -  2949.120:   10.0935%  (      342)
00:07:01.374   2949.120 -  2964.015:   10.9482%  (      320)
00:07:01.374   2964.015 -  2978.909:   11.8697%  (      345)
00:07:01.374   2978.909 -  2993.804:   12.8739%  (      376)
00:07:01.374   2993.804 -  3008.698:   13.7767%  (      338)
00:07:01.374   3008.698 -  3023.593:   14.8104%  (      387)
00:07:01.374   3023.593 -  3038.487:   15.8253%  (      380)
00:07:01.374   3038.487 -  3053.382:   16.7708%  (      354)
00:07:01.374   3053.382 -  3068.276:   17.7537%  (      368)
00:07:01.374   3068.276 -  3083.171:   18.8221%  (      400)
00:07:01.374   3083.171 -  3098.065:   19.9519%  (      423)
00:07:01.374   3098.065 -  3112.960:   21.2473%  (      485)
00:07:01.374   3112.960 -  3127.855:   22.4813%  (      462)
00:07:01.374   3127.855 -  3142.749:   23.9637%  (      555)
00:07:01.374   3142.749 -  3157.644:   25.3018%  (      501)
00:07:01.374   3157.644 -  3172.538:   26.6934%  (      521)
00:07:01.374   3172.538 -  3187.433:   28.2719%  (      591)
00:07:01.374   3187.433 -  3202.327:   29.8905%  (      606)
00:07:01.374   3202.327 -  3217.222:   31.5251%  (      612)
00:07:01.374   3217.222 -  3232.116:   33.1891%  (      623)
00:07:01.374   3232.116 -  3247.011:   34.9332%  (      653)
00:07:01.374   3247.011 -  3261.905:   36.8082%  (      702)
00:07:01.374   3261.905 -  3276.800:   38.6165%  (      677)
00:07:01.374   3276.800 -  3291.695:   40.5769%  (      734)
00:07:01.374   3291.695 -  3306.589:   42.6255%  (      767)
00:07:01.374   3306.589 -  3321.484:   44.6341%  (      752)
00:07:01.374   3321.484 -  3336.378:   46.5465%  (      716)
00:07:01.374   3336.378 -  3351.273:   48.2399%  (      634)
00:07:01.374   3351.273 -  3366.167:   50.0721%  (      686)
00:07:01.374   3366.167 -  3381.062:   52.0459%  (      739)
00:07:01.374   3381.062 -  3395.956:   54.0224%  (      740)
00:07:01.374   3395.956 -  3410.851:   55.8093%  (      669)
00:07:01.374   3410.851 -  3425.745:   57.5668%  (      658)
00:07:01.374   3425.745 -  3440.640:   59.3216%  (      657)
00:07:01.374   3440.640 -  3455.535:   60.9722%  (      618)
00:07:01.374   3455.535 -  3470.429:   62.5374%  (      586)
00:07:01.374   3470.429 -  3485.324:   64.1373%  (      599)
00:07:01.374   3485.324 -  3500.218:   65.6437%  (      564)
00:07:01.374   3500.218 -  3515.113:   67.0673%  (      533)
00:07:01.374   3515.113 -  3530.007:   68.5203%  (      544)
00:07:01.374   3530.007 -  3544.902:   69.7756%  (      470)
00:07:01.374   3544.902 -  3559.796:   71.0176%  (      465)
00:07:01.374   3559.796 -  3574.691:   72.1875%  (      438)
00:07:01.374   3574.691 -  3589.585:   73.3360%  (      430)
00:07:01.374   3589.585 -  3604.480:   74.4311%  (      410)
00:07:01.374   3604.480 -  3619.375:   75.6010%  (      438)
00:07:01.374   3619.375 -  3634.269:   76.6400%  (      389)
00:07:01.374   3634.269 -  3649.164:   77.5828%  (      353)
00:07:01.374   3649.164 -  3664.058:   78.4642%  (      330)
00:07:01.374   3664.058 -  3678.953:   79.3376%  (      327)
00:07:01.374   3678.953 -  3693.847:   80.2537%  (      343)
00:07:01.374   3693.847 -  3708.742:   81.0844%  (      311)
00:07:01.374   3708.742 -  3723.636:   81.9257%  (      315)
00:07:01.374   3723.636 -  3738.531:   82.8098%  (      331)
00:07:01.374   3738.531 -  3753.425:   83.5550%  (      279)
00:07:01.374   3753.425 -  3768.320:   84.2895%  (      275)
00:07:01.374   3768.320 -  3783.215:   84.9733%  (      256)
00:07:01.374   3783.215 -  3798.109:   85.6651%  (      259)
00:07:01.374   3798.109 -  3813.004:   86.3114%  (      242)
00:07:01.374   3813.004 -  3842.793:   87.5374%  (      459)
00:07:01.374   3842.793 -  3872.582:   88.6779%  (      427)
00:07:01.374   3872.582 -  3902.371:   89.6181%  (      352)
00:07:01.374   3902.371 -  3932.160:   90.5155%  (      336)
00:07:01.374   3932.160 -  3961.949:   91.3301%  (      305)
00:07:01.374   3961.949 -  3991.738:   92.0593%  (      273)
00:07:01.374   3991.738 -  4021.527:   92.6709%  (      229)
00:07:01.374   4021.527 -  4051.316:   93.3040%  (      237)
00:07:01.374   4051.316 -  4081.105:   93.8702%  (      212)
00:07:01.374   4081.105 -  4110.895:   94.4017%  (      199)
00:07:01.374   4110.895 -  4140.684:   94.9519%  (      206)
00:07:01.374   4140.684 -  4170.473:   95.3579%  (      152)
00:07:01.374   4170.473 -  4200.262:   95.6891%  (      124)
00:07:01.374   4200.262 -  4230.051:   95.9829%  (      110)
00:07:01.374   4230.051 -  4259.840:   96.3194%  (      126)
00:07:01.374   4259.840 -  4289.629:   96.6426%  (      121)
00:07:01.374   4289.629 -  4319.418:   96.9257%  (      106)
00:07:01.374   4319.418 -  4349.207:   97.1688%  (       91)
00:07:01.374   4349.207 -  4378.996:   97.3905%  (       83)
00:07:01.374   4378.996 -  4408.785:   97.5988%  (       78)
00:07:01.374   4408.785 -  4438.575:   97.7911%  (       72)
00:07:01.374   4438.575 -  4468.364:   97.9354%  (       54)
00:07:01.374   4468.364 -  4498.153:   98.0449%  (       41)
00:07:01.374   4498.153 -  4527.942:   98.1490%  (       39)
00:07:01.374   4527.942 -  4557.731:   98.2212%  (       27)
00:07:01.374   4557.731 -  4587.520:   98.2719%  (       19)
00:07:01.374   4587.520 -  4617.309:   98.3467%  (       28)
00:07:01.374   4617.309 -  4647.098:   98.4348%  (       33)
00:07:01.374   4647.098 -  4676.887:   98.5630%  (       48)
00:07:01.374   4676.887 -  4706.676:   98.6672%  (       39)
00:07:01.374   4706.676 -  4736.465:   98.7714%  (       39)
00:07:01.374   4736.465 -  4766.255:   98.8729%  (       38)
00:07:01.374   4766.255 -  4796.044:   99.0091%  (       51)
00:07:01.374   4796.044 -  4825.833:   99.1132%  (       39)
00:07:01.374   4825.833 -  4855.622:   99.2121%  (       37)
00:07:01.374   4855.622 -  4885.411:   99.2922%  (       30)
00:07:01.374   4885.411 -  4915.200:   99.3697%  (       29)
00:07:01.374   4915.200 -  4944.989:   99.4418%  (       27)
00:07:01.374   4944.989 -  4974.778:   99.4952%  (       20)
00:07:01.374   4974.778 -  5004.567:   99.5379%  (       16)
00:07:01.374   5004.567 -  5034.356:   99.5700%  (       12)
00:07:01.374   5034.356 -  5064.145:   99.5940%  (        9)
00:07:01.374   5064.145 -  5093.935:   99.5994%  (        2)
00:07:01.374   5093.935 -  5123.724:   99.6127%  (        5)
00:07:01.374   5123.724 -  5153.513:   99.6234%  (        4)
00:07:01.374   5153.513 -  5183.302:   99.6368%  (        5)
00:07:01.374   5183.302 -  5213.091:   99.6474%  (        4)
00:07:01.374   5213.091 -  5242.880:   99.6581%  (        4)
00:07:01.374   6523.811 -  6553.600:   99.7089%  (       19)
00:07:01.374   6553.600 -  6583.389:   99.7356%  (       10)
00:07:01.374   6583.389 -  6613.178:   99.7436%  (        3)
00:07:01.374   6613.178 -  6642.967:   99.7489%  (        2)
00:07:01.374   6642.967 -  6672.756:   99.7569%  (        3)
00:07:01.374   6672.756 -  6702.545:   99.7650%  (        3)
00:07:01.374   6702.545 -  6732.335:   99.7730%  (        3)
00:07:01.374   6732.335 -  6762.124:   99.7783%  (        2)
00:07:01.374   6762.124 -  6791.913:   99.7863%  (        3)
00:07:01.374   6791.913 -  6821.702:   99.7943%  (        3)
00:07:01.374   6821.702 -  6851.491:   99.8024%  (        3)
00:07:01.374   6851.491 -  6881.280:   99.8104%  (        3)
00:07:01.374   6881.280 -  6911.069:   99.8157%  (        2)
00:07:01.374   6911.069 -  6940.858:   99.8237%  (        3)
00:07:01.374   6940.858 -  6970.647:   99.8291%  (        2)
00:07:01.374   9413.353 -  9472.931:   99.8317%  (        1)
00:07:01.374   9472.931 -  9532.509:   99.8451%  (        5)
00:07:01.374   9532.509 -  9592.087:   99.8638%  (        7)
00:07:01.374   9592.087 -  9651.665:   99.8825%  (        7)
00:07:01.374   9651.665 -  9711.244:   99.8985%  (        6)
00:07:01.374   9711.244 -  9770.822:   99.9172%  (        7)
00:07:01.374   9770.822 -  9830.400:   99.9306%  (        5)
00:07:01.374   9830.400 -  9889.978:   99.9493%  (        7)
00:07:01.374   9889.978 -  9949.556:   99.9653%  (        6)
00:07:01.374   9949.556 - 10009.135:   99.9813%  (        6)
00:07:01.374  10009.135 - 10068.713:   99.9973%  (        6)
00:07:01.374  10068.713 - 10128.291:  100.0000%  (        1)
00:07:01.374  
00:07:01.374  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:01.374  ==============================================================================
00:07:01.374         Range in us     Cumulative    IO count
00:07:01.374   1832.029 -  1839.476:    0.0027%  (        1)
00:07:01.374   1995.869 -  2010.764:    0.0053%  (        1)
00:07:01.374   2353.338 -  2368.233:    0.0160%  (        4)
00:07:01.374   2368.233 -  2383.127:    0.0213%  (        2)
00:07:01.374   2383.127 -  2398.022:    0.0400%  (        7)
00:07:01.374   2398.022 -  2412.916:    0.0480%  (        3)
00:07:01.374   2412.916 -  2427.811:    0.0560%  (        3)
00:07:01.374   2427.811 -  2442.705:    0.0640%  (        3)
00:07:01.374   2442.705 -  2457.600:    0.0720%  (        3)
00:07:01.374   2457.600 -  2472.495:    0.0800%  (        3)
00:07:01.374   2472.495 -  2487.389:    0.0933%  (        5)
00:07:01.374   2487.389 -  2502.284:    0.1013%  (        3)
00:07:01.374   2502.284 -  2517.178:    0.1200%  (        7)
00:07:01.374   2517.178 -  2532.073:    0.1467%  (       10)
00:07:01.374   2532.073 -  2546.967:    0.1733%  (       10)
00:07:01.374   2546.967 -  2561.862:    0.2160%  (       16)
00:07:01.374   2561.862 -  2576.756:    0.2693%  (       20)
00:07:01.374   2576.756 -  2591.651:    0.3146%  (       17)
00:07:01.374   2591.651 -  2606.545:    0.3813%  (       25)
00:07:01.374   2606.545 -  2621.440:    0.4719%  (       34)
00:07:01.374   2621.440 -  2636.335:    0.6426%  (       64)
00:07:01.374   2636.335 -  2651.229:    0.7999%  (       59)
00:07:01.374   2651.229 -  2666.124:    0.9359%  (       51)
00:07:01.374   2666.124 -  2681.018:    1.1465%  (       79)
00:07:01.374   2681.018 -  2695.913:    1.3119%  (       62)
00:07:01.374   2695.913 -  2710.807:    1.5145%  (       76)
00:07:01.374   2710.807 -  2725.702:    1.7491%  (       88)
00:07:01.374   2725.702 -  2740.596:    2.0025%  (       95)
00:07:01.374   2740.596 -  2755.491:    2.3571%  (      133)
00:07:01.374   2755.491 -  2770.385:    2.7250%  (      138)
00:07:01.374   2770.385 -  2785.280:    3.1757%  (      169)
00:07:01.374   2785.280 -  2800.175:    3.6049%  (      161)
00:07:01.374   2800.175 -  2815.069:    4.0396%  (      163)
00:07:01.375   2815.069 -  2829.964:    4.4529%  (      155)
00:07:01.375   2829.964 -  2844.858:    4.9568%  (      189)
00:07:01.375   2844.858 -  2859.753:    5.3994%  (      166)
00:07:01.375   2859.753 -  2874.647:    5.8954%  (      186)
00:07:01.375   2874.647 -  2889.542:    6.4180%  (      196)
00:07:01.375   2889.542 -  2904.436:    6.8846%  (      175)
00:07:01.375   2904.436 -  2919.331:    7.4765%  (      222)
00:07:01.375   2919.331 -  2934.225:    8.0551%  (      217)
00:07:01.375   2934.225 -  2949.120:    8.6257%  (      214)
00:07:01.375   2949.120 -  2964.015:    9.2283%  (      226)
00:07:01.375   2964.015 -  2978.909:   10.0149%  (      295)
00:07:01.375   2978.909 -  2993.804:   10.8442%  (      311)
00:07:01.375   2993.804 -  3008.698:   11.7454%  (      338)
00:07:01.375   3008.698 -  3023.593:   12.7613%  (      381)
00:07:01.375   3023.593 -  3038.487:   13.7372%  (      366)
00:07:01.375   3038.487 -  3053.382:   14.7451%  (      378)
00:07:01.375   3053.382 -  3068.276:   15.8756%  (      424)
00:07:01.375   3068.276 -  3083.171:   17.1155%  (      465)
00:07:01.375   3083.171 -  3098.065:   18.5047%  (      521)
00:07:01.375   3098.065 -  3112.960:   20.0272%  (      571)
00:07:01.375   3112.960 -  3127.855:   21.4884%  (      548)
00:07:01.375   3127.855 -  3142.749:   23.0082%  (      570)
00:07:01.375   3142.749 -  3157.644:   24.6827%  (      628)
00:07:01.375   3157.644 -  3172.538:   26.4425%  (      660)
00:07:01.375   3172.538 -  3187.433:   28.5170%  (      778)
00:07:01.375   3187.433 -  3202.327:   30.3488%  (      687)
00:07:01.375   3202.327 -  3217.222:   32.1992%  (      694)
00:07:01.375   3217.222 -  3232.116:   33.9910%  (      672)
00:07:01.375   3232.116 -  3247.011:   35.8415%  (      694)
00:07:01.375   3247.011 -  3261.905:   37.7586%  (      719)
00:07:01.375   3261.905 -  3276.800:   39.6144%  (      696)
00:07:01.375   3276.800 -  3291.695:   41.6516%  (      764)
00:07:01.375   3291.695 -  3306.589:   43.5687%  (      719)
00:07:01.375   3306.589 -  3321.484:   45.3978%  (      686)
00:07:01.375   3321.484 -  3336.378:   47.1363%  (      652)
00:07:01.375   3336.378 -  3351.273:   48.9761%  (      690)
00:07:01.375   3351.273 -  3366.167:   50.7706%  (      673)
00:07:01.375   3366.167 -  3381.062:   52.5597%  (      671)
00:07:01.375   3381.062 -  3395.956:   54.2369%  (      629)
00:07:01.375   3395.956 -  3410.851:   56.0154%  (      667)
00:07:01.375   3410.851 -  3425.745:   57.7512%  (      651)
00:07:01.375   3425.745 -  3440.640:   59.3723%  (      608)
00:07:01.375   3440.640 -  3455.535:   60.9562%  (      594)
00:07:01.375   3455.535 -  3470.429:   62.4947%  (      577)
00:07:01.375   3470.429 -  3485.324:   63.9878%  (      560)
00:07:01.375   3485.324 -  3500.218:   65.3850%  (      524)
00:07:01.375   3500.218 -  3515.113:   66.8489%  (      549)
00:07:01.375   3515.113 -  3530.007:   68.2860%  (      539)
00:07:01.375   3530.007 -  3544.902:   69.5766%  (      484)
00:07:01.375   3544.902 -  3559.796:   70.8804%  (      489)
00:07:01.375   3559.796 -  3574.691:   72.1096%  (      461)
00:07:01.375   3574.691 -  3589.585:   73.1575%  (      393)
00:07:01.375   3589.585 -  3604.480:   74.1548%  (      374)
00:07:01.375   3604.480 -  3619.375:   75.3013%  (      430)
00:07:01.375   3619.375 -  3634.269:   76.2799%  (      367)
00:07:01.375   3634.269 -  3649.164:   77.2504%  (      364)
00:07:01.375   3649.164 -  3664.058:   78.3036%  (      395)
00:07:01.375   3664.058 -  3678.953:   79.2022%  (      337)
00:07:01.375   3678.953 -  3693.847:   80.1301%  (      348)
00:07:01.375   3693.847 -  3708.742:   81.1007%  (      364)
00:07:01.375   3708.742 -  3723.636:   81.9459%  (      317)
00:07:01.375   3723.636 -  3738.531:   82.7618%  (      306)
00:07:01.375   3738.531 -  3753.425:   83.5191%  (      284)
00:07:01.375   3753.425 -  3768.320:   84.3430%  (      309)
00:07:01.375   3768.320 -  3783.215:   85.1589%  (      306)
00:07:01.375   3783.215 -  3798.109:   85.7562%  (      224)
00:07:01.375   3798.109 -  3813.004:   86.3401%  (      219)
00:07:01.375   3813.004 -  3842.793:   87.5000%  (      435)
00:07:01.375   3842.793 -  3872.582:   88.6039%  (      414)
00:07:01.375   3872.582 -  3902.371:   89.5611%  (      359)
00:07:01.375   3902.371 -  3932.160:   90.5023%  (      353)
00:07:01.375   3932.160 -  3961.949:   91.3023%  (      300)
00:07:01.375   3961.949 -  3991.738:   92.0942%  (      297)
00:07:01.375   3991.738 -  4021.527:   92.8354%  (      278)
00:07:01.375   4021.527 -  4051.316:   93.5180%  (      256)
00:07:01.375   4051.316 -  4081.105:   94.0806%  (      211)
00:07:01.375   4081.105 -  4110.895:   94.6219%  (      203)
00:07:01.375   4110.895 -  4140.684:   95.1285%  (      190)
00:07:01.375   4140.684 -  4170.473:   95.6111%  (      181)
00:07:01.375   4170.473 -  4200.262:   96.0191%  (      153)
00:07:01.375   4200.262 -  4230.051:   96.4031%  (      144)
00:07:01.375   4230.051 -  4259.840:   96.7070%  (      114)
00:07:01.375   4259.840 -  4289.629:   96.9497%  (       91)
00:07:01.375   4289.629 -  4319.418:   97.1630%  (       80)
00:07:01.375   4319.418 -  4349.207:   97.3816%  (       82)
00:07:01.375   4349.207 -  4378.996:   97.6216%  (       90)
00:07:01.375   4378.996 -  4408.785:   97.8722%  (       94)
00:07:01.375   4408.785 -  4438.575:   98.0802%  (       78)
00:07:01.375   4438.575 -  4468.364:   98.2429%  (       61)
00:07:01.375   4468.364 -  4498.153:   98.3575%  (       43)
00:07:01.375   4498.153 -  4527.942:   98.4935%  (       51)
00:07:01.375   4527.942 -  4557.731:   98.6455%  (       57)
00:07:01.375   4557.731 -  4587.520:   98.7468%  (       38)
00:07:01.375   4587.520 -  4617.309:   98.8828%  (       51)
00:07:01.375   4617.309 -  4647.098:   98.9868%  (       39)
00:07:01.375   4647.098 -  4676.887:   99.0908%  (       39)
00:07:01.375   4676.887 -  4706.676:   99.1761%  (       32)
00:07:01.375   4706.676 -  4736.465:   99.2347%  (       22)
00:07:01.375   4736.465 -  4766.255:   99.2801%  (       17)
00:07:01.375   4766.255 -  4796.044:   99.3441%  (       24)
00:07:01.375   4796.044 -  4825.833:   99.4081%  (       24)
00:07:01.375   4825.833 -  4855.622:   99.4987%  (       34)
00:07:01.375   4855.622 -  4885.411:   99.5920%  (       35)
00:07:01.375   4885.411 -  4915.200:   99.6214%  (       11)
00:07:01.375   4915.200 -  4944.989:   99.6400%  (        7)
00:07:01.375   4944.989 -  4974.778:   99.6560%  (        6)
00:07:01.375   5362.036 -  5391.825:   99.6614%  (        2)
00:07:01.375   5391.825 -  5421.615:   99.6720%  (        4)
00:07:01.375   5421.615 -  5451.404:   99.6800%  (        3)
00:07:01.375   5451.404 -  5481.193:   99.6880%  (        3)
00:07:01.375   5481.193 -  5510.982:   99.6987%  (        4)
00:07:01.375   5510.982 -  5540.771:   99.7067%  (        3)
00:07:01.375   5540.771 -  5570.560:   99.7174%  (        4)
00:07:01.375   5570.560 -  5600.349:   99.7254%  (        3)
00:07:01.375   5600.349 -  5630.138:   99.7334%  (        3)
00:07:01.375   5630.138 -  5659.927:   99.7440%  (        4)
00:07:01.375   5659.927 -  5689.716:   99.7520%  (        3)
00:07:01.375   5689.716 -  5719.505:   99.7627%  (        4)
00:07:01.375   5719.505 -  5749.295:   99.7707%  (        3)
00:07:01.375   5749.295 -  5779.084:   99.7787%  (        3)
00:07:01.375   5779.084 -  5808.873:   99.7894%  (        4)
00:07:01.375   5808.873 -  5838.662:   99.7974%  (        3)
00:07:01.375   5838.662 -  5868.451:   99.8080%  (        4)
00:07:01.375   5868.451 -  5898.240:   99.8160%  (        3)
00:07:01.375   5898.240 -  5928.029:   99.8267%  (        4)
00:07:01.375   6911.069 -  6940.858:   99.8294%  (        1)
00:07:01.375   7417.484 -  7447.273:   99.8320%  (        1)
00:07:01.375   7447.273 -  7477.062:   99.8427%  (        4)
00:07:01.375   7477.062 -  7506.851:   99.8507%  (        3)
00:07:01.375   7506.851 -  7536.640:   99.8587%  (        3)
00:07:01.375   7536.640 -  7566.429:   99.8693%  (        4)
00:07:01.375   7566.429 -  7596.218:   99.8773%  (        3)
00:07:01.375   7596.218 -  7626.007:   99.8880%  (        4)
00:07:01.375   7626.007 -  7685.585:   99.9040%  (        6)
00:07:01.375   7685.585 -  7745.164:   99.9227%  (        7)
00:07:01.375   7745.164 -  7804.742:   99.9413%  (        7)
00:07:01.375   7804.742 -  7864.320:   99.9600%  (        7)
00:07:01.375   7864.320 -  7923.898:   99.9760%  (        6)
00:07:01.375   7923.898 -  7983.476:   99.9920%  (        6)
00:07:01.375   7983.476 -  8043.055:  100.0000%  (        3)
00:07:01.375  
00:07:01.375   07:51:17 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:01.375  
00:07:01.375  real	0m2.421s
00:07:01.375  user	0m2.106s
00:07:01.375  sys	0m0.195s
00:07:01.375   07:51:17 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.375   07:51:17 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:01.375  ************************************
00:07:01.375  END TEST nvme_perf
00:07:01.375  ************************************
00:07:01.375   07:51:17 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:01.375   07:51:17 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:01.375   07:51:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:01.375   07:51:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:01.375  ************************************
00:07:01.375  START TEST nvme_hello_world
00:07:01.375  ************************************
00:07:01.727   07:51:17 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:01.727  [2024-11-20 07:51:17.593160] memory.c:1123:vtophys_notify: *ERROR*: could not get phys addr for 0x201000a00000
00:07:01.727  [2024-11-20 07:51:17.593871] nvme_pcie.c: 469:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: spdk_mem_register() failed
00:07:01.727  [2024-11-20 07:51:17.594722] memory.c:1123:vtophys_notify: *ERROR*: could not get phys addr for 0x20100aa00000
00:07:01.727  [2024-11-20 07:51:17.594747] nvme_pcie.c: 469:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: spdk_mem_register() failed
00:07:01.727  Initializing NVMe Controllers
00:07:01.727  Attached to 0000:00:10.0
00:07:01.727    Namespace ID: 1 size: 5GB
00:07:01.727  Attached to 0000:00:11.0
00:07:01.727    Namespace ID: 1 size: 5GB
00:07:01.727  Initialization complete.
00:07:01.727  INFO: using host memory buffer for IO
00:07:01.727  Hello world!
00:07:01.727  INFO: using host memory buffer for IO
00:07:01.727  Hello world!
00:07:01.727  
00:07:01.727  real	0m0.201s
00:07:01.727  user	0m0.062s
00:07:01.727  sys	0m0.086s
00:07:01.727   07:51:17 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.727   07:51:17 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:01.727  ************************************
00:07:01.727  END TEST nvme_hello_world
00:07:01.727  ************************************
00:07:01.727   07:51:17 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:01.727   07:51:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:01.727   07:51:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:01.727   07:51:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:01.727  ************************************
00:07:01.727  START TEST nvme_sgl
00:07:01.727  ************************************
00:07:01.727   07:51:17 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:02.006  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:02.006  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:02.006  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:02.006  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:02.006  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:02.006  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:02.006  0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:02.006  0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:02.006  0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:02.006  0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:02.006  0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:02.006  0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:02.006  NVMe Readv/Writev Request test
00:07:02.006  Attached to 0000:00:10.0
00:07:02.006  Attached to 0000:00:11.0
00:07:02.006  0000:00:10.0: build_io_request_2 test passed
00:07:02.006  0000:00:10.0: build_io_request_4 test passed
00:07:02.006  0000:00:10.0: build_io_request_5 test passed
00:07:02.006  0000:00:10.0: build_io_request_6 test passed
00:07:02.006  0000:00:10.0: build_io_request_7 test passed
00:07:02.006  0000:00:10.0: build_io_request_10 test passed
00:07:02.006  0000:00:11.0: build_io_request_2 test passed
00:07:02.006  0000:00:11.0: build_io_request_4 test passed
00:07:02.006  0000:00:11.0: build_io_request_5 test passed
00:07:02.006  0000:00:11.0: build_io_request_6 test passed
00:07:02.006  0000:00:11.0: build_io_request_7 test passed
00:07:02.006  0000:00:11.0: build_io_request_10 test passed
00:07:02.006  Cleaning up...
00:07:02.006  
00:07:02.006  real	0m0.184s
00:07:02.006  user	0m0.067s
00:07:02.006  sys	0m0.079s
00:07:02.006   07:51:17 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.006   07:51:17 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:02.006  ************************************
00:07:02.006  END TEST nvme_sgl
00:07:02.006  ************************************
00:07:02.006   07:51:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:02.006   07:51:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:02.006   07:51:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.006   07:51:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:02.006  ************************************
00:07:02.006  START TEST nvme_e2edp
00:07:02.006  ************************************
00:07:02.006   07:51:17 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:02.265  NVMe Write/Read with End-to-End data protection test
00:07:02.265  Attached to 0000:00:10.0
00:07:02.265  Attached to 0000:00:11.0
00:07:02.265  Cleaning up...
00:07:02.265  
00:07:02.265  real	0m0.186s
00:07:02.265  user	0m0.051s
00:07:02.265  sys	0m0.088s
00:07:02.265   07:51:18 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.265   07:51:18 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:02.265  ************************************
00:07:02.265  END TEST nvme_e2edp
00:07:02.265  ************************************
00:07:02.265   07:51:18 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:02.265   07:51:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:02.265   07:51:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.265   07:51:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:02.265  ************************************
00:07:02.265  START TEST nvme_reserve
00:07:02.265  ************************************
00:07:02.265   07:51:18 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:02.523  =====================================================
00:07:02.523  NVMe Controller at PCI bus 0, device 16, function 0
00:07:02.523  =====================================================
00:07:02.523  Reservations:                Not Supported
00:07:02.523  =====================================================
00:07:02.523  NVMe Controller at PCI bus 0, device 17, function 0
00:07:02.523  =====================================================
00:07:02.523  Reservations:                Not Supported
00:07:02.523  Reservation test passed
00:07:02.523  
00:07:02.523  real	0m0.178s
00:07:02.523  user	0m0.049s
00:07:02.523  sys	0m0.088s
00:07:02.523   07:51:18 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.523   07:51:18 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:02.523  ************************************
00:07:02.523  END TEST nvme_reserve
00:07:02.523  ************************************
00:07:02.523   07:51:18 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:02.523   07:51:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:02.523   07:51:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.523   07:51:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:02.523  ************************************
00:07:02.523  START TEST nvme_err_injection
00:07:02.523  ************************************
00:07:02.524   07:51:18 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:02.783  NVMe Error Injection test
00:07:02.783  Attached to 0000:00:10.0
00:07:02.783  Attached to 0000:00:11.0
00:07:02.783  0000:00:10.0: get features failed as expected
00:07:02.783  0000:00:11.0: get features failed as expected
00:07:02.783  0000:00:11.0: get features succeeded as expected
00:07:02.783  0000:00:10.0: get features succeeded as expected
00:07:02.783  0000:00:11.0: read failed as expected
00:07:02.783  0000:00:10.0: read failed as expected
00:07:02.783  0000:00:10.0: read succeeded as expected
00:07:02.783  0000:00:11.0: read succeeded as expected
00:07:02.783  Cleaning up...
00:07:02.783  
00:07:02.783  real	0m0.203s
00:07:02.783  user	0m0.061s
00:07:02.783  sys	0m0.092s
00:07:02.783   07:51:18 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.783   07:51:18 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:02.783  ************************************
00:07:02.783  END TEST nvme_err_injection
00:07:02.783  ************************************
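Editor's note: the err_injection pass above arms injected failures for Get Features and Read, confirms both fail as expected on each controller, then clears the injections and confirms the same commands succeed. A minimal sketch for re-running that binary outside the harness, using the path from the run_test line; running scripts/setup.sh first and using sudo are assumptions, not steps shown in this log:

#!/usr/bin/env bash
# Hedged sketch: stand-alone re-run of the error-injection test traced above.
# The binary path is copied from the log; invoking scripts/setup.sh (to bind
# 0000:00:10.0 and 0000:00:11.0 to a userspace driver) and sudo are assumptions.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/scripts/setup.sh"
sudo "$SPDK_DIR/test/nvme/err_injection/err_injection"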
00:07:02.783   07:51:18 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:02.783   07:51:18 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:02.783   07:51:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.783   07:51:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:02.783  ************************************
00:07:02.783  START TEST nvme_overhead
00:07:02.783  ************************************
00:07:02.783   07:51:18 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:03.733  Initializing NVMe Controllers
00:07:03.733  Attached to 0000:00:10.0
00:07:03.733  Attached to 0000:00:11.0
00:07:03.733  Initialization complete. Launching workers.
00:07:03.733  submit (in ns)   avg, min, max =  11515.4,   9542.7,  80945.5
00:07:03.733  complete (in ns) avg, min, max =   8475.6,   6957.3,  73760.0
00:07:03.733  
00:07:03.733  Submit histogram
00:07:03.733  ================
00:07:03.733         Range in us     Cumulative     Count
00:07:03.733      9.542 -     9.600:    0.0087%  (        1)
00:07:03.733      9.716 -     9.775:    0.0434%  (        4)
00:07:03.733      9.775 -     9.833:    0.2603%  (       25)
00:07:03.733      9.833 -     9.891:    1.4663%  (      139)
00:07:03.733      9.891 -     9.949:    6.0559%  (      529)
00:07:03.733      9.949 -    10.007:   13.6908%  (      880)
00:07:03.733     10.007 -    10.065:   22.4796%  (     1013)
00:07:03.733     10.065 -    10.124:   30.1926%  (      889)
00:07:03.733     10.124 -    10.182:   36.7951%  (      761)
00:07:03.733     10.182 -    10.240:   41.9660%  (      596)
00:07:03.733     10.240 -    10.298:   46.0437%  (      470)
00:07:03.733     10.298 -    10.356:   49.2105%  (      365)
00:07:03.733     10.356 -    10.415:   51.5443%  (      269)
00:07:03.733     10.415 -    10.473:   53.0626%  (      175)
00:07:03.733     10.473 -    10.531:   54.6070%  (      178)
00:07:03.733     10.531 -    10.589:   55.5613%  (      110)
00:07:03.733     10.589 -    10.647:   56.4984%  (      108)
00:07:03.733     10.647 -    10.705:   57.3139%  (       94)
00:07:03.733     10.705 -    10.764:   57.9646%  (       75)
00:07:03.733     10.764 -    10.822:   58.4591%  (       57)
00:07:03.733     10.822 -    10.880:   58.8149%  (       41)
00:07:03.733     10.880 -    10.938:   59.1445%  (       38)
00:07:03.733     10.938 -    10.996:   59.5523%  (       47)
00:07:03.733     10.996 -    11.055:   59.6825%  (       15)
00:07:03.733     11.055 -    11.113:   59.8473%  (       19)
00:07:03.733     11.113 -    11.171:   59.9341%  (       10)
00:07:03.733     11.171 -    11.229:   60.0555%  (       14)
00:07:03.733     11.229 -    11.287:   60.1510%  (       11)
00:07:03.733     11.287 -    11.345:   60.2290%  (        9)
00:07:03.733     11.345 -    11.404:   60.3158%  (       10)
00:07:03.733     11.404 -    11.462:   60.3679%  (        6)
00:07:03.733     11.462 -    11.520:   60.4373%  (        8)
00:07:03.733     11.520 -    11.578:   60.4807%  (        5)
00:07:03.733     11.578 -    11.636:   60.5067%  (        3)
00:07:03.733     11.636 -    11.695:   60.5154%  (        1)
00:07:03.733     11.695 -    11.753:   60.5327%  (        2)
00:07:03.733     11.753 -    11.811:   60.5501%  (        2)
00:07:03.733     11.811 -    11.869:   60.5934%  (        5)
00:07:03.733     11.869 -    11.927:   60.7062%  (       13)
00:07:03.733     11.927 -    11.985:   61.3222%  (       71)
00:07:03.733     11.985 -    12.044:   62.9880%  (      192)
00:07:03.733     12.044 -    12.102:   66.2329%  (      374)
00:07:03.733     12.102 -    12.160:   69.7467%  (      405)
00:07:03.733     12.160 -    12.218:   72.5924%  (      328)
00:07:03.733     12.218 -    12.276:   75.1605%  (      296)
00:07:03.733     12.276 -    12.335:   77.1647%  (      231)
00:07:03.733     12.335 -    12.393:   78.7264%  (      180)
00:07:03.733     12.393 -    12.451:   79.6807%  (      110)
00:07:03.733     12.451 -    12.509:   80.3922%  (       82)
00:07:03.733     12.509 -    12.567:   80.8780%  (       56)
00:07:03.733     12.567 -    12.625:   81.3292%  (       52)
00:07:03.733     12.625 -    12.684:   81.6502%  (       37)
00:07:03.733     12.684 -    12.742:   81.9105%  (       30)
00:07:03.733     12.742 -    12.800:   82.1100%  (       23)
00:07:03.733     12.800 -    12.858:   82.3356%  (       26)
00:07:03.733     12.858 -    12.916:   82.8995%  (       65)
00:07:03.733     12.916 -    12.975:   84.1836%  (      148)
00:07:03.733     12.975 -    13.033:   85.9101%  (      199)
00:07:03.733     13.033 -    13.091:   87.6106%  (      196)
00:07:03.733     13.091 -    13.149:   88.9467%  (      154)
00:07:03.733     13.149 -    13.207:   89.8317%  (      102)
00:07:03.733     13.207 -    13.265:   90.6559%  (       95)
00:07:03.733     13.265 -    13.324:   91.2546%  (       69)
00:07:03.733     13.324 -    13.382:   91.7231%  (       54)
00:07:03.733     13.382 -    13.440:   92.0788%  (       41)
00:07:03.733     13.440 -    13.498:   92.3738%  (       34)
00:07:03.733     13.498 -    13.556:   92.5560%  (       21)
00:07:03.733     13.556 -    13.615:   92.6861%  (       15)
00:07:03.733     13.615 -    13.673:   92.8596%  (       20)
00:07:03.733     13.673 -    13.731:   92.9898%  (       15)
00:07:03.733     13.731 -    13.789:   93.1026%  (       13)
00:07:03.734     13.789 -    13.847:   93.1806%  (        9)
00:07:03.734     13.847 -    13.905:   93.2761%  (       11)
00:07:03.734     13.905 -    13.964:   93.3281%  (        6)
00:07:03.734     13.964 -    14.022:   93.3889%  (        7)
00:07:03.734     14.022 -    14.080:   93.4062%  (        2)
00:07:03.734     14.080 -    14.138:   93.4322%  (        3)
00:07:03.734     14.138 -    14.196:   93.4843%  (        6)
00:07:03.734     14.196 -    14.255:   93.5103%  (        3)
00:07:03.734     14.255 -    14.313:   93.5450%  (        4)
00:07:03.734     14.313 -    14.371:   93.5884%  (        5)
00:07:03.734     14.371 -    14.429:   93.6058%  (        2)
00:07:03.734     14.429 -    14.487:   93.6491%  (        5)
00:07:03.734     14.487 -    14.545:   93.6925%  (        5)
00:07:03.734     14.545 -    14.604:   93.7533%  (        7)
00:07:03.734     14.604 -    14.662:   93.7966%  (        5)
00:07:03.734     14.662 -    14.720:   93.8487%  (        6)
00:07:03.734     14.720 -    14.778:   93.8747%  (        3)
00:07:03.734     14.778 -    14.836:   93.9007%  (        3)
00:07:03.734     14.836 -    14.895:   93.9441%  (        5)
00:07:03.734     14.895 -    15.011:   94.0309%  (       10)
00:07:03.734     15.011 -    15.127:   94.1003%  (        8)
00:07:03.734     15.127 -    15.244:   94.2044%  (       12)
00:07:03.734     15.244 -    15.360:   94.3866%  (       21)
00:07:03.734     15.360 -    15.476:   94.5167%  (       15)
00:07:03.734     15.476 -    15.593:   94.6035%  (       10)
00:07:03.734     15.593 -    15.709:   94.7510%  (       17)
00:07:03.734     15.709 -    15.825:   94.8464%  (       11)
00:07:03.734     15.825 -    15.942:   94.9419%  (       11)
00:07:03.734     15.942 -    16.058:   95.0460%  (       12)
00:07:03.734     16.058 -    16.175:   95.1501%  (       12)
00:07:03.734     16.175 -    16.291:   95.2802%  (       15)
00:07:03.734     16.291 -    16.407:   95.4277%  (       17)
00:07:03.734     16.407 -    16.524:   95.5145%  (       10)
00:07:03.734     16.524 -    16.640:   95.6186%  (       12)
00:07:03.734     16.640 -    16.756:   95.6880%  (        8)
00:07:03.734     16.756 -    16.873:   95.8789%  (       22)
00:07:03.734     16.873 -    16.989:   95.9917%  (       13)
00:07:03.734     16.989 -    17.105:   96.1305%  (       16)
00:07:03.734     17.105 -    17.222:   96.1999%  (        8)
00:07:03.734     17.222 -    17.338:   96.3821%  (       21)
00:07:03.734     17.338 -    17.455:   96.5556%  (       20)
00:07:03.734     17.455 -    17.571:   96.6684%  (       13)
00:07:03.734     17.571 -    17.687:   96.7899%  (       14)
00:07:03.734     17.687 -    17.804:   96.9374%  (       17)
00:07:03.734     17.804 -    17.920:   97.0241%  (       10)
00:07:03.734     17.920 -    18.036:   97.1369%  (       13)
00:07:03.734     18.036 -    18.153:   97.2237%  (       10)
00:07:03.734     18.153 -    18.269:   97.2931%  (        8)
00:07:03.734     18.269 -    18.385:   97.3972%  (       12)
00:07:03.734     18.385 -    18.502:   97.4839%  (       10)
00:07:03.734     18.502 -    18.618:   97.6314%  (       17)
00:07:03.734     18.618 -    18.735:   97.6922%  (        7)
00:07:03.734     18.735 -    18.851:   97.7963%  (       12)
00:07:03.734     18.851 -    18.967:   97.8570%  (        7)
00:07:03.734     18.967 -    19.084:   97.9698%  (       13)
00:07:03.734     19.084 -    19.200:   98.0739%  (       12)
00:07:03.734     19.200 -    19.316:   98.1780%  (       12)
00:07:03.734     19.316 -    19.433:   98.2648%  (       10)
00:07:03.734     19.433 -    19.549:   98.3255%  (        7)
00:07:03.734     19.549 -    19.665:   98.3949%  (        8)
00:07:03.734     19.665 -    19.782:   98.4730%  (        9)
00:07:03.734     19.782 -    19.898:   98.5337%  (        7)
00:07:03.734     19.898 -    20.015:   98.5685%  (        4)
00:07:03.734     20.015 -    20.131:   98.6465%  (        9)
00:07:03.734     20.131 -    20.247:   98.6899%  (        5)
00:07:03.734     20.247 -    20.364:   98.8027%  (       13)
00:07:03.734     20.364 -    20.480:   98.8287%  (        3)
00:07:03.734     20.480 -    20.596:   98.8895%  (        7)
00:07:03.734     20.596 -    20.713:   98.9502%  (        7)
00:07:03.734     20.713 -    20.829:   98.9762%  (        3)
00:07:03.734     20.829 -    20.945:   99.0023%  (        3)
00:07:03.734     20.945 -    21.062:   99.0543%  (        6)
00:07:03.734     21.062 -    21.178:   99.0890%  (        4)
00:07:03.734     21.178 -    21.295:   99.1411%  (        6)
00:07:03.734     21.295 -    21.411:   99.1758%  (        4)
00:07:03.734     21.411 -    21.527:   99.1931%  (        2)
00:07:03.734     21.644 -    21.760:   99.2192%  (        3)
00:07:03.734     21.760 -    21.876:   99.2625%  (        5)
00:07:03.734     21.876 -    21.993:   99.2886%  (        3)
00:07:03.734     21.993 -    22.109:   99.3493%  (        7)
00:07:03.734     22.109 -    22.225:   99.3580%  (        1)
00:07:03.734     22.225 -    22.342:   99.4187%  (        7)
00:07:03.734     22.342 -    22.458:   99.4361%  (        2)
00:07:03.734     22.458 -    22.575:   99.4534%  (        2)
00:07:03.734     22.575 -    22.691:   99.4794%  (        3)
00:07:03.734     22.691 -    22.807:   99.5055%  (        3)
00:07:03.734     22.924 -    23.040:   99.5228%  (        2)
00:07:03.734     23.040 -    23.156:   99.5488%  (        3)
00:07:03.734     23.389 -    23.505:   99.5662%  (        2)
00:07:03.734     23.505 -    23.622:   99.6009%  (        4)
00:07:03.734     23.622 -    23.738:   99.6183%  (        2)
00:07:03.734     23.738 -    23.855:   99.6269%  (        1)
00:07:03.734     23.971 -    24.087:   99.6443%  (        2)
00:07:03.734     24.087 -    24.204:   99.6530%  (        1)
00:07:03.734     24.204 -    24.320:   99.6616%  (        1)
00:07:03.734     24.320 -    24.436:   99.6790%  (        2)
00:07:03.734     24.436 -    24.553:   99.6877%  (        1)
00:07:03.734     24.669 -    24.785:   99.7050%  (        2)
00:07:03.734     25.018 -    25.135:   99.7137%  (        1)
00:07:03.734     25.135 -    25.251:   99.7224%  (        1)
00:07:03.734     25.484 -    25.600:   99.7310%  (        1)
00:07:03.734     25.600 -    25.716:   99.7484%  (        2)
00:07:03.734     25.716 -    25.833:   99.7571%  (        1)
00:07:03.734     25.833 -    25.949:   99.7657%  (        1)
00:07:03.734     26.415 -    26.531:   99.7744%  (        1)
00:07:03.734     26.647 -    26.764:   99.7918%  (        2)
00:07:03.734     26.764 -    26.880:   99.8005%  (        1)
00:07:03.734     26.996 -    27.113:   99.8091%  (        1)
00:07:03.734     27.345 -    27.462:   99.8178%  (        1)
00:07:03.734     27.578 -    27.695:   99.8265%  (        1)
00:07:03.734     27.811 -    27.927:   99.8352%  (        1)
00:07:03.734     29.207 -    29.324:   99.8438%  (        1)
00:07:03.734     29.673 -    29.789:   99.8525%  (        1)
00:07:03.734     29.789 -    30.022:   99.8612%  (        1)
00:07:03.734     30.720 -    30.953:   99.8699%  (        1)
00:07:03.734     31.651 -    31.884:   99.8785%  (        1)
00:07:03.734     33.513 -    33.745:   99.8959%  (        2)
00:07:03.734     35.142 -    35.375:   99.9132%  (        2)
00:07:03.734     36.073 -    36.305:   99.9306%  (        2)
00:07:03.734     37.236 -    37.469:   99.9393%  (        1)
00:07:03.734     37.702 -    37.935:   99.9479%  (        1)
00:07:03.734     37.935 -    38.167:   99.9566%  (        1)
00:07:03.734     43.985 -    44.218:   99.9653%  (        1)
00:07:03.734     45.847 -    46.080:   99.9740%  (        1)
00:07:03.734     50.967 -    51.200:   99.9826%  (        1)
00:07:03.734     54.924 -    55.156:   99.9913%  (        1)
00:07:03.734     80.524 -    80.989:  100.0000%  (        1)
00:07:03.734  
00:07:03.734  Complete histogram
00:07:03.734  ==================
00:07:03.734         Range in us     Cumulative     Count
00:07:03.734      6.953 -     6.982:    0.0087%  (        1)
00:07:03.734      6.982 -     7.011:    0.0174%  (        1)
00:07:03.734      7.011 -     7.040:    0.0694%  (        6)
00:07:03.734      7.040 -     7.069:    0.4685%  (       46)
00:07:03.734      7.069 -     7.098:    1.4749%  (      116)
00:07:03.734      7.098 -     7.127:    3.8955%  (      279)
00:07:03.734      7.127 -     7.156:    7.1838%  (      379)
00:07:03.734      7.156 -     7.185:   10.9665%  (      436)
00:07:03.734      7.185 -     7.215:   15.6082%  (      535)
00:07:03.734      7.215 -     7.244:   20.6403%  (      580)
00:07:03.734      7.244 -     7.273:   24.9349%  (      495)
00:07:03.734      7.273 -     7.302:   29.5246%  (      529)
00:07:03.734      7.302 -     7.331:   33.4201%  (      449)
00:07:03.734      7.331 -     7.360:   37.0380%  (      417)
00:07:03.734      7.360 -     7.389:   39.7536%  (      313)
00:07:03.734      7.389 -     7.418:   42.0094%  (      260)
00:07:03.734      7.418 -     7.447:   43.9528%  (      224)
00:07:03.734      7.447 -     7.505:   47.0154%  (      353)
00:07:03.734      7.505 -     7.564:   49.4361%  (      279)
00:07:03.734      7.564 -     7.622:   50.9977%  (      180)
00:07:03.734      7.622 -     7.680:   52.3165%  (      152)
00:07:03.734      7.680 -     7.738:   53.7654%  (      167)
00:07:03.734      7.738 -     7.796:   54.5809%  (       94)
00:07:03.734      7.796 -     7.855:   55.5700%  (      114)
00:07:03.734      7.855 -     7.913:   56.4550%  (      102)
00:07:03.734      7.913 -     7.971:   56.9322%  (       55)
00:07:03.734      7.971 -     8.029:   57.4180%  (       56)
00:07:03.734      8.029 -     8.087:   57.8518%  (       50)
00:07:03.734      8.087 -     8.145:   58.1555%  (       35)
00:07:03.734      8.145 -     8.204:   58.4158%  (       30)
00:07:03.734      8.204 -     8.262:   58.6153%  (       23)
00:07:03.734      8.262 -     8.320:   58.8582%  (       28)
00:07:03.734      8.320 -     8.378:   59.4395%  (       67)
00:07:03.734      8.378 -     8.436:   61.3656%  (      222)
00:07:03.734      8.436 -     8.495:   63.6561%  (      264)
00:07:03.734      8.495 -     8.553:   65.5995%  (      224)
00:07:03.734      8.553 -     8.611:   66.9009%  (      150)
00:07:03.734      8.611 -     8.669:   67.7512%  (       98)
00:07:03.734      8.669 -     8.727:   68.3845%  (       73)
00:07:03.734      8.727 -     8.785:   69.7900%  (      162)
00:07:03.734      8.785 -     8.844:   72.7312%  (      339)
00:07:03.734      8.844 -     8.902:   75.7505%  (      348)
00:07:03.734      8.902 -     8.960:   79.4118%  (      422)
00:07:03.734      8.960 -     9.018:   82.4571%  (      351)
00:07:03.734      9.018 -     9.076:   84.7475%  (      264)
00:07:03.734      9.076 -     9.135:   86.2398%  (      172)
00:07:03.734      9.135 -     9.193:   87.2549%  (      117)
00:07:03.734      9.193 -     9.251:   87.8449%  (       68)
00:07:03.734      9.251 -     9.309:   88.3307%  (       56)
00:07:03.734      9.309 -     9.367:   88.6951%  (       42)
00:07:03.735      9.367 -     9.425:   88.9988%  (       35)
00:07:03.735      9.425 -     9.484:   89.3458%  (       40)
00:07:03.735      9.484 -     9.542:   89.6842%  (       39)
00:07:03.735      9.542 -     9.600:   90.0226%  (       39)
00:07:03.735      9.600 -     9.658:   90.2655%  (       28)
00:07:03.735      9.658 -     9.716:   90.5431%  (       32)
00:07:03.735      9.716 -     9.775:   90.7947%  (       29)
00:07:03.735      9.775 -     9.833:   90.9769%  (       21)
00:07:03.735      9.833 -     9.891:   91.1331%  (       18)
00:07:03.735      9.891 -     9.949:   91.2285%  (       11)
00:07:03.735      9.949 -    10.007:   91.3240%  (       11)
00:07:03.735     10.007 -    10.065:   91.4628%  (       16)
00:07:03.735     10.065 -    10.124:   91.5495%  (       10)
00:07:03.735     10.124 -    10.182:   91.6276%  (        9)
00:07:03.735     10.182 -    10.240:   91.6884%  (        7)
00:07:03.735     10.240 -    10.298:   91.7751%  (       10)
00:07:03.735     10.298 -    10.356:   91.8706%  (       11)
00:07:03.735     10.356 -    10.415:   91.9833%  (       13)
00:07:03.735     10.415 -    10.473:   92.0441%  (        7)
00:07:03.735     10.473 -    10.531:   92.1135%  (        8)
00:07:03.735     10.531 -    10.589:   92.2263%  (       13)
00:07:03.735     10.589 -    10.647:   92.3217%  (       11)
00:07:03.735     10.647 -    10.705:   92.3824%  (        7)
00:07:03.735     10.705 -    10.764:   92.4258%  (        5)
00:07:03.735     10.764 -    10.822:   92.4779%  (        6)
00:07:03.735     10.822 -    10.880:   92.6080%  (       15)
00:07:03.735     10.880 -    10.938:   92.7468%  (       16)
00:07:03.735     10.938 -    10.996:   92.8423%  (       11)
00:07:03.735     10.996 -    11.055:   92.9898%  (       17)
00:07:03.735     11.055 -    11.113:   93.1199%  (       15)
00:07:03.735     11.113 -    11.171:   93.2414%  (       14)
00:07:03.735     11.171 -    11.229:   93.3368%  (       11)
00:07:03.735     11.229 -    11.287:   93.4149%  (        9)
00:07:03.735     11.287 -    11.345:   93.4583%  (        5)
00:07:03.735     11.345 -    11.404:   93.5103%  (        6)
00:07:03.735     11.404 -    11.462:   93.6405%  (       15)
00:07:03.735     11.462 -    11.520:   93.7099%  (        8)
00:07:03.735     11.520 -    11.578:   93.7793%  (        8)
00:07:03.735     11.578 -    11.636:   93.8053%  (        3)
00:07:03.735     11.636 -    11.695:   93.9007%  (       11)
00:07:03.735     11.695 -    11.753:   93.9528%  (        6)
00:07:03.735     11.753 -    11.811:   93.9875%  (        4)
00:07:03.735     11.811 -    11.869:   94.0396%  (        6)
00:07:03.735     11.869 -    11.927:   94.0743%  (        4)
00:07:03.735     11.927 -    11.985:   94.1437%  (        8)
00:07:03.735     11.985 -    12.044:   94.1871%  (        5)
00:07:03.735     12.102 -    12.160:   94.2218%  (        4)
00:07:03.735     12.160 -    12.218:   94.2478%  (        3)
00:07:03.735     12.218 -    12.276:   94.2825%  (        4)
00:07:03.735     12.276 -    12.335:   94.3085%  (        3)
00:07:03.735     12.335 -    12.393:   94.3259%  (        2)
00:07:03.735     12.393 -    12.451:   94.3606%  (        4)
00:07:03.735     12.451 -    12.509:   94.4213%  (        7)
00:07:03.735     12.509 -    12.567:   94.4820%  (        7)
00:07:03.735     12.567 -    12.625:   94.5341%  (        6)
00:07:03.735     12.625 -    12.684:   94.6035%  (        8)
00:07:03.735     12.684 -    12.742:   94.6469%  (        5)
00:07:03.735     12.742 -    12.800:   94.7076%  (        7)
00:07:03.735     12.800 -    12.858:   94.7683%  (        7)
00:07:03.735     12.858 -    12.916:   94.8117%  (        5)
00:07:03.735     12.916 -    12.975:   94.8638%  (        6)
00:07:03.735     12.975 -    13.033:   94.9505%  (       10)
00:07:03.735     13.033 -    13.091:   95.0460%  (       11)
00:07:03.735     13.091 -    13.149:   95.0980%  (        6)
00:07:03.735     13.149 -    13.207:   95.1588%  (        7)
00:07:03.735     13.207 -    13.265:   95.2022%  (        5)
00:07:03.735     13.265 -    13.324:   95.2282%  (        3)
00:07:03.735     13.324 -    13.382:   95.2716%  (        5)
00:07:03.735     13.382 -    13.440:   95.3063%  (        4)
00:07:03.735     13.440 -    13.498:   95.3757%  (        8)
00:07:03.735     13.498 -    13.556:   95.4364%  (        7)
00:07:03.735     13.556 -    13.615:   95.4971%  (        7)
00:07:03.735     13.615 -    13.673:   95.5579%  (        7)
00:07:03.735     13.673 -    13.731:   95.6186%  (        7)
00:07:03.735     13.731 -    13.789:   95.6793%  (        7)
00:07:03.735     13.789 -    13.847:   95.7661%  (       10)
00:07:03.735     13.847 -    13.905:   95.8095%  (        5)
00:07:03.735     13.905 -    13.964:   95.8789%  (        8)
00:07:03.735     13.964 -    14.022:   95.9656%  (       10)
00:07:03.735     14.022 -    14.080:   96.0437%  (        9)
00:07:03.735     14.080 -    14.138:   96.1131%  (        8)
00:07:03.735     14.138 -    14.196:   96.1392%  (        3)
00:07:03.735     14.196 -    14.255:   96.1912%  (        6)
00:07:03.735     14.255 -    14.313:   96.2433%  (        6)
00:07:03.735     14.313 -    14.371:   96.3474%  (       12)
00:07:03.735     14.371 -    14.429:   96.3734%  (        3)
00:07:03.735     14.429 -    14.487:   96.4255%  (        6)
00:07:03.735     14.487 -    14.545:   96.5036%  (        9)
00:07:03.735     14.545 -    14.604:   96.5816%  (        9)
00:07:03.735     14.604 -    14.662:   96.6510%  (        8)
00:07:03.735     14.662 -    14.720:   96.7031%  (        6)
00:07:03.735     14.720 -    14.778:   96.7638%  (        7)
00:07:03.735     14.778 -    14.836:   96.8246%  (        7)
00:07:03.735     14.836 -    14.895:   96.8766%  (        6)
00:07:03.735     14.895 -    15.011:   96.9721%  (       11)
00:07:03.735     15.011 -    15.127:   97.0415%  (        8)
00:07:03.735     15.127 -    15.244:   97.1282%  (       10)
00:07:03.735     15.244 -    15.360:   97.2237%  (       11)
00:07:03.735     15.360 -    15.476:   97.3451%  (       14)
00:07:03.735     15.476 -    15.593:   97.4839%  (       16)
00:07:03.735     15.593 -    15.709:   97.5794%  (       11)
00:07:03.735     15.709 -    15.825:   97.6401%  (        7)
00:07:03.735     15.825 -    15.942:   97.7356%  (       11)
00:07:03.735     15.942 -    16.058:   97.8657%  (       15)
00:07:03.735     16.058 -    16.175:   97.9004%  (        4)
00:07:03.735     16.175 -    16.291:   97.9698%  (        8)
00:07:03.735     16.291 -    16.407:   98.0566%  (       10)
00:07:03.735     16.407 -    16.524:   98.1520%  (       11)
00:07:03.735     16.524 -    16.640:   98.2301%  (        9)
00:07:03.735     16.640 -    16.756:   98.2821%  (        6)
00:07:03.735     16.756 -    16.873:   98.3776%  (       11)
00:07:03.735     16.873 -    16.989:   98.4643%  (       10)
00:07:03.735     16.989 -    17.105:   98.5077%  (        5)
00:07:03.735     17.105 -    17.222:   98.5598%  (        6)
00:07:03.735     17.222 -    17.338:   98.6118%  (        6)
00:07:03.735     17.338 -    17.455:   98.7159%  (       12)
00:07:03.735     17.455 -    17.571:   98.7680%  (        6)
00:07:03.735     17.571 -    17.687:   98.8461%  (        9)
00:07:03.735     17.687 -    17.804:   98.8895%  (        5)
00:07:03.735     17.804 -    17.920:   98.9676%  (        9)
00:07:03.735     17.920 -    18.036:   98.9936%  (        3)
00:07:03.735     18.036 -    18.153:   99.0109%  (        2)
00:07:03.735     18.153 -    18.269:   99.0890%  (        9)
00:07:03.735     18.269 -    18.385:   99.1237%  (        4)
00:07:03.735     18.385 -    18.502:   99.1758%  (        6)
00:07:03.735     18.502 -    18.618:   99.2278%  (        6)
00:07:03.735     18.618 -    18.735:   99.2625%  (        4)
00:07:03.735     18.735 -    18.851:   99.2886%  (        3)
00:07:03.735     18.851 -    18.967:   99.3406%  (        6)
00:07:03.735     18.967 -    19.084:   99.3580%  (        2)
00:07:03.735     19.084 -    19.200:   99.3753%  (        2)
00:07:03.735     19.200 -    19.316:   99.3927%  (        2)
00:07:03.735     19.316 -    19.433:   99.4100%  (        2)
00:07:03.735     19.433 -    19.549:   99.4534%  (        5)
00:07:03.735     19.665 -    19.782:   99.4881%  (        4)
00:07:03.735     19.782 -    19.898:   99.5055%  (        2)
00:07:03.735     19.898 -    20.015:   99.5141%  (        1)
00:07:03.735     20.015 -    20.131:   99.5315%  (        2)
00:07:03.735     20.131 -    20.247:   99.5402%  (        1)
00:07:03.735     20.247 -    20.364:   99.5575%  (        2)
00:07:03.735     20.364 -    20.480:   99.5662%  (        1)
00:07:03.735     20.480 -    20.596:   99.5836%  (        2)
00:07:03.735     20.596 -    20.713:   99.5922%  (        1)
00:07:03.735     20.829 -    20.945:   99.6009%  (        1)
00:07:03.735     20.945 -    21.062:   99.6096%  (        1)
00:07:03.735     21.062 -    21.178:   99.6183%  (        1)
00:07:03.735     21.295 -    21.411:   99.6269%  (        1)
00:07:03.735     21.411 -    21.527:   99.6356%  (        1)
00:07:03.735     21.527 -    21.644:   99.6443%  (        1)
00:07:03.735     21.876 -    21.993:   99.6790%  (        4)
00:07:03.735     21.993 -    22.109:   99.6877%  (        1)
00:07:03.735     22.225 -    22.342:   99.6963%  (        1)
00:07:03.735     22.458 -    22.575:   99.7050%  (        1)
00:07:03.735     22.691 -    22.807:   99.7137%  (        1)
00:07:03.735     22.924 -    23.040:   99.7310%  (        2)
00:07:03.735     23.156 -    23.273:   99.7397%  (        1)
00:07:03.735     23.273 -    23.389:   99.7571%  (        2)
00:07:03.735     23.855 -    23.971:   99.7657%  (        1)
00:07:03.735     24.204 -    24.320:   99.7744%  (        1)
00:07:03.735     24.320 -    24.436:   99.7831%  (        1)
00:07:03.735     24.436 -    24.553:   99.7918%  (        1)
00:07:03.735     24.553 -    24.669:   99.8005%  (        1)
00:07:03.735     26.415 -    26.531:   99.8091%  (        1)
00:07:03.735     27.695 -    27.811:   99.8178%  (        1)
00:07:03.735     27.811 -    27.927:   99.8265%  (        1)
00:07:03.735     27.927 -    28.044:   99.8352%  (        1)
00:07:03.735     28.975 -    29.091:   99.8438%  (        1)
00:07:03.735     29.091 -    29.207:   99.8525%  (        1)
00:07:03.735     29.324 -    29.440:   99.8612%  (        1)
00:07:03.735     30.255 -    30.487:   99.8699%  (        1)
00:07:03.735     31.884 -    32.116:   99.8785%  (        1)
00:07:03.735     32.116 -    32.349:   99.8959%  (        2)
00:07:03.735     32.582 -    32.815:   99.9046%  (        1)
00:07:03.735     33.280 -    33.513:   99.9132%  (        1)
00:07:03.735     33.745 -    33.978:   99.9219%  (        1)
00:07:03.735     34.676 -    34.909:   99.9306%  (        1)
00:07:03.735     35.375 -    35.607:   99.9393%  (        1)
00:07:03.735     35.607 -    35.840:   99.9566%  (        2)
00:07:03.735     37.702 -    37.935:   99.9653%  (        1)
00:07:03.994     40.029 -    40.262:   99.9740%  (        1)
00:07:03.994     44.916 -    45.149:   99.9826%  (        1)
00:07:03.994     54.924 -    55.156:   99.9913%  (        1)
00:07:03.994     73.542 -    74.007:  100.0000%  (        1)
00:07:03.994  
00:07:03.994  
00:07:03.994  real	0m1.169s
00:07:03.994  user	0m1.038s
00:07:03.994  sys	0m0.080s
00:07:03.994   07:51:19 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.994   07:51:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:03.994  ************************************
00:07:03.994  END TEST nvme_overhead
00:07:03.994  ************************************
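Editor's note: the nvme_overhead histograms above report per-I/O submission and completion times in nanoseconds for a one-second run with 4 KiB I/O (the -o 4096 -t 1 flags on the expanded command line). A hedged sketch for repeating the measurement by hand over a longer window; the flag meanings (-o I/O size in bytes, -t seconds, -H histogram output, -i shared-memory instance id) are inferred from that command line and its output, not stated anywhere in this log:

#!/usr/bin/env bash
# Hedged sketch: re-run the overhead measurement with a longer (10 s) window so the
# histogram tails settle. All flags except -t are copied from the logged command.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/test/nvme/overhead/overhead" -o 4096 -t 10 -H -i 0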
00:07:03.994   07:51:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:03.994   07:51:19 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:03.994   07:51:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.994   07:51:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:03.994  ************************************
00:07:03.994  START TEST nvme_arbitration
00:07:03.994  ************************************
00:07:03.994   07:51:19 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:07.282  Initializing NVMe Controllers
00:07:07.282  Attached to 0000:00:10.0
00:07:07.282  Attached to 0000:00:11.0
00:07:07.282  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:07:07.282  Associating QEMU NVMe Ctrl       (12341               ) with lcore 1
00:07:07.282  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:07:07.282  Associating QEMU NVMe Ctrl       (12341               ) with lcore 3
00:07:07.282  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:07.282  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:07.282  Initialization complete. Launching workers.
00:07:07.282  Starting thread on core 1 with urgent priority queue
00:07:07.282  Starting thread on core 2 with urgent priority queue
00:07:07.282  Starting thread on core 3 with urgent priority queue
00:07:07.282  Starting thread on core 0 with urgent priority queue
00:07:07.282  QEMU NVMe Ctrl       (12340               ) core 0:  6314.67 IO/s    15.84 secs/100000 ios
00:07:07.282  QEMU NVMe Ctrl       (12341               ) core 1:  6186.67 IO/s    16.16 secs/100000 ios
00:07:07.282  QEMU NVMe Ctrl       (12340               ) core 2:  3200.00 IO/s    31.25 secs/100000 ios
00:07:07.282  QEMU NVMe Ctrl       (12341               ) core 3:  3178.67 IO/s    31.46 secs/100000 ios
00:07:07.282  ========================================================
00:07:07.282  
00:07:07.282  ************************************
00:07:07.282  END TEST nvme_arbitration
00:07:07.282  ************************************
00:07:07.282  
00:07:07.282  real	0m3.208s
00:07:07.282  user	0m9.010s
00:07:07.282  sys	0m0.091s
00:07:07.282   07:51:23 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.282   07:51:23 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
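Editor's note: the arbitration run above uses the expanded configuration shown in the trace, a 3-second randrw workload (-w randrw -t 3) at queue depth 64 (-q 16... -q 64) across core mask 0xf, one urgent-priority worker per core; the per-core rates are self-consistent (for example, 100000 ios / 6314.67 IO/s ≈ 15.84 s, matching the secs/100000 ios column). A hedged sketch for repeating the run on two cores, with every other flag copied verbatim from the trace:

#!/usr/bin/env bash
# Hedged sketch: re-run the arbitration example on cores 0-1 for 5 seconds. Only -c
# and -t differ from the logged run; the remaining flags are copied verbatim from
# the expanded command line in the trace above.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/build/examples/arbitration" \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 5 -c 0x3 -m 0 -a 0 -b 0 -n 100000 -i 0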
00:07:07.282   07:51:23 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:07.282   07:51:23 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:07.282   07:51:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.282   07:51:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:07.282  ************************************
00:07:07.282  START TEST nvme_single_aen
00:07:07.282  ************************************
00:07:07.282   07:51:23 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:07.282  Asynchronous Event Request test
00:07:07.282  Attached to 0000:00:10.0
00:07:07.282  Attached to 0000:00:11.0
00:07:07.282  Reset controller to setup AER completions for this process
00:07:07.282  Registering asynchronous event callbacks...
00:07:07.282  Getting orig temperature thresholds of all controllers
00:07:07.282  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:07.282  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:07.282  Setting all controllers' temperature thresholds low to trigger AER
00:07:07.282  Waiting for all controllers' temperature thresholds to be set lower
00:07:07.282  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:07.282  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:07.282  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:07.282  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:07.282  Waiting for all controllers to trigger AER and reset threshold
00:07:07.282  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:07.282  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:07.282  Cleaning up...
00:07:07.282  ************************************
00:07:07.282  END TEST nvme_single_aen
00:07:07.282  ************************************
00:07:07.282  
00:07:07.282  real	0m0.191s
00:07:07.282  user	0m0.053s
00:07:07.283  sys	0m0.093s
00:07:07.283   07:51:23 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.283   07:51:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:07.283   07:51:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:07.283   07:51:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:07.283   07:51:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.283   07:51:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:07.542  ************************************
00:07:07.542  START TEST nvme_doorbell_aers
00:07:07.542  ************************************
00:07:07.542   07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:07.542   07:51:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:07.542   07:51:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:07.542   07:51:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:07.542    07:51:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:07.542    07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:07.542    07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:07.542    07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:07.542     07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:07.542     07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:07.542    07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:07:07.542    07:51:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:07:07.542   07:51:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:07.542   07:51:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:07.542  [2024-11-20 07:51:23.560668] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:17.510  Executing: test_write_invalid_db
00:07:17.510  Waiting for AER completion...
00:07:17.510  Failure: test_write_invalid_db
00:07:17.510  
00:07:17.510  Executing: test_invalid_db_write_overflow_sq
00:07:17.510  Waiting for AER completion...
00:07:17.510  Failure: test_invalid_db_write_overflow_sq
00:07:17.510  
00:07:17.510  Executing: test_invalid_db_write_overflow_cq
00:07:17.510  Waiting for AER completion...
00:07:17.510  Failure: test_invalid_db_write_overflow_cq
00:07:17.510  
00:07:17.510   07:51:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:17.510   07:51:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:17.769  [2024-11-20 07:51:33.583941] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  Executing: test_write_invalid_db
00:07:27.744  Waiting for AER completion...
00:07:27.744  Failure: test_write_invalid_db
00:07:27.744  
00:07:27.744  Executing: test_invalid_db_write_overflow_sq
00:07:27.744  Waiting for AER completion...
00:07:27.744  Failure: test_invalid_db_write_overflow_sq
00:07:27.744  
00:07:27.744  Executing: test_invalid_db_write_overflow_cq
00:07:27.744  Waiting for AER completion...
00:07:27.744  Failure: test_invalid_db_write_overflow_cq
00:07:27.744  
00:07:27.744  
00:07:27.744  real	0m20.099s
00:07:27.744  user	0m16.249s
00:07:27.744  sys	0m3.702s
00:07:27.744   07:51:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.744   07:51:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:07:27.744  ************************************
00:07:27.744  END TEST nvme_doorbell_aers
00:07:27.744  ************************************
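Editor's note: the nvme_doorbell_aers function above builds its device list by asking scripts/gen_nvme.sh for a JSON config and extracting each traddr with jq, then runs the doorbell exerciser against each PCIe address under a 10-second timeout. A minimal sketch of that same loop, with paths and flags copied from the expanded trace (sudo is an assumption):

#!/usr/bin/env bash
# Hedged sketch of the loop nvme_doorbell_aers expands to above: enumerate PCIe
# addresses from gen_nvme.sh's JSON output, then run the exerciser against each
# address with a 10-second cap, exactly as the xtrace shows.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
for bdf in "${bdfs[@]}"; do
    sudo timeout --preserve-status 10 \
        "$SPDK_DIR/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done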
00:07:27.744    07:51:43 nvme -- nvme/nvme.sh@97 -- # uname
00:07:27.744   07:51:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:07:27.744   07:51:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:07:27.744   07:51:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:27.744   07:51:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.744   07:51:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:27.744  ************************************
00:07:27.744  START TEST nvme_multi_aen
00:07:27.744  ************************************
00:07:27.744   07:51:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:07:27.744  [2024-11-20 07:51:43.651680] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  [2024-11-20 07:51:43.651790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  [2024-11-20 07:51:43.651817] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  [2024-11-20 07:51:43.653684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  [2024-11-20 07:51:43.653944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  [2024-11-20 07:51:43.653976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61306) is not found. Dropping the request.
00:07:27.744  Child process pid: 61593
00:07:28.004  [Child] Asynchronous Event Request test
00:07:28.004  [Child] Attached to 0000:00:10.0
00:07:28.004  [Child] Attached to 0000:00:11.0
00:07:28.004  [Child] Registering asynchronous event callbacks...
00:07:28.004  [Child] Getting orig temperature thresholds of all controllers
00:07:28.004  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:28.004  [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:28.004  [Child] Waiting for all controllers to trigger AER and reset threshold
00:07:28.004  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:28.004  [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:28.004  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:28.004  [Child] 0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:28.004  [Child] Cleaning up...
00:07:28.004  Asynchronous Event Request test
00:07:28.004  Attached to 0000:00:10.0
00:07:28.004  Attached to 0000:00:11.0
00:07:28.004  Reset controller to setup AER completions for this process
00:07:28.004  Registering asynchronous event callbacks...
00:07:28.004  Getting orig temperature thresholds of all controllers
00:07:28.004  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:28.004  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:28.004  Setting all controllers' temperature thresholds low to trigger AER
00:07:28.004  Waiting for all controllers' temperature thresholds to be set lower
00:07:28.004  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:28.004  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:28.004  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:28.004  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:28.004  Waiting for all controllers to trigger AER and reset threshold
00:07:28.004  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:28.004  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:28.004  Cleaning up...
00:07:28.004  
00:07:28.004  real	0m0.392s
00:07:28.004  user	0m0.105s
00:07:28.004  sys	0m0.193s
00:07:28.004  ************************************
00:07:28.004  END TEST nvme_multi_aen
00:07:28.004  ************************************
00:07:28.004   07:51:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.004   07:51:43 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
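Editor's note: the nvme_multi_aen pass differs from the earlier nvme_single_aen run only by the -m flag, which forks a child process (pid 61593 above) that attaches to the same two controllers; that is why a [Child]-prefixed copy of the AER sequence appears before the parent's. A hedged sketch of the stand-alone invocation, with all flags copied from the logged command line:

#!/usr/bin/env bash
# Hedged sketch: the multi-process AER run traced above. -m adds the forked child
# (the "[Child]" block in the log), -T exercises the temperature-threshold AER path
# (thresholds lowered, AER fires, thresholds restored), and -i 0 keeps the processes
# in the same shared-memory instance as the rest of this test run.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/test/nvme/aer/aer" -m -T -i 0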
00:07:28.004   07:51:43 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:07:28.004   07:51:43 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:28.004   07:51:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.004   07:51:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.004  ************************************
00:07:28.004  START TEST nvme_startup
00:07:28.004  ************************************
00:07:28.004   07:51:43 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:07:28.269  Initializing NVMe Controllers
00:07:28.269  Attached to 0000:00:10.0
00:07:28.269  Attached to 0000:00:11.0
00:07:28.269  Initialization complete.
00:07:28.269  Time used:149325.453      (us).
00:07:28.269  
00:07:28.269  real	0m0.186s
00:07:28.269  user	0m0.048s
00:07:28.269  sys	0m0.099s
00:07:28.269  ************************************
00:07:28.269  END TEST nvme_startup
00:07:28.269  ************************************
00:07:28.269   07:51:44 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.269   07:51:44 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:07:28.269   07:51:44 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:07:28.269   07:51:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.269   07:51:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.269   07:51:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.269  ************************************
00:07:28.269  START TEST nvme_multi_secondary
00:07:28.269  ************************************
00:07:28.269   07:51:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:07:28.269   07:51:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=61646
00:07:28.269   07:51:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:07:28.269   07:51:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=61647
00:07:28.269   07:51:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:07:28.269   07:51:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:07:31.549  Initializing NVMe Controllers
00:07:31.549  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:31.549  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:31.549  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:07:31.549  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:07:31.549  Initialization complete. Launching workers.
00:07:31.549  ========================================================
00:07:31.549                                                                             Latency(us)
00:07:31.549  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:31.549  PCIE (0000:00:10.0) NSID 1 from core  2:    5882.00      22.98    2720.45     286.31   16066.44
00:07:31.549  PCIE (0000:00:11.0) NSID 1 from core  2:    5882.00      22.98    2720.44     267.52   15958.43
00:07:31.549  ========================================================
00:07:31.549  Total                                  :   11764.00      45.95    2720.44     267.52   16066.44
00:07:31.549  
00:07:31.549   07:51:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 61646
00:07:31.808  Initializing NVMe Controllers
00:07:31.808  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:31.808  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:31.808  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:07:31.808  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:07:31.808  Initialization complete. Launching workers.
00:07:31.808  ========================================================
00:07:31.808                                                                             Latency(us)
00:07:31.808  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:31.808  PCIE (0000:00:10.0) NSID 1 from core  1:   13378.32      52.26    1195.81     268.30   11525.25
00:07:31.808  PCIE (0000:00:11.0) NSID 1 from core  1:   13356.99      52.18    1197.76     272.37   11855.53
00:07:31.808  ========================================================
00:07:31.808  Total                                  :   26735.32     104.43    1196.78     268.30   11855.53
00:07:31.808  
00:07:33.708  Initializing NVMe Controllers
00:07:33.708  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:33.708  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:33.708  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:33.708  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:33.708  Initialization complete. Launching workers.
00:07:33.708  ========================================================
00:07:33.708                                                                             Latency(us)
00:07:33.708  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:33.708  PCIE (0000:00:10.0) NSID 1 from core  0:   21909.06      85.58     730.11     183.14    6635.61
00:07:33.708  PCIE (0000:00:11.0) NSID 1 from core  0:   21918.86      85.62     729.80     183.49    6834.81
00:07:33.708  ========================================================
00:07:33.708  Total                                  :   43827.93     171.20     729.95     183.14    6834.81
00:07:33.708  
00:07:33.708   07:51:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 61647
00:07:33.708   07:51:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=61712
00:07:33.708   07:51:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:07:33.708   07:51:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=61713
00:07:33.708   07:51:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:07:33.708   07:51:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:07:37.077  Initializing NVMe Controllers
00:07:37.077  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:37.077  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:37.077  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:37.077  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:37.077  Initialization complete. Launching workers.
00:07:37.077  ========================================================
00:07:37.077                                                                             Latency(us)
00:07:37.077  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:37.077  PCIE (0000:00:10.0) NSID 1 from core  0:   14140.76      55.24    1131.26     231.12    7009.50
00:07:37.077  PCIE (0000:00:11.0) NSID 1 from core  0:   14140.10      55.23    1131.35     210.60    6779.49
00:07:37.077  ========================================================
00:07:37.077  Total                                  :   28280.86     110.47    1131.31     210.60    7009.50
00:07:37.077  
00:07:37.077  Initializing NVMe Controllers
00:07:37.077  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:37.077  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:37.077  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:07:37.077  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:07:37.077  Initialization complete. Launching workers.
00:07:37.077  ========================================================
00:07:37.077                                                                             Latency(us)
00:07:37.077  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:37.077  PCIE (0000:00:10.0) NSID 1 from core  1:   13426.88      52.45    1191.43     226.40    9243.61
00:07:37.077  PCIE (0000:00:11.0) NSID 1 from core  1:   13416.55      52.41    1192.41     235.45    9336.52
00:07:37.077  ========================================================
00:07:37.077  Total                                  :   26843.43     104.86    1191.92     226.40    9336.52
00:07:37.077  
00:07:38.976  Initializing NVMe Controllers
00:07:38.976  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:38.976  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:38.976  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:07:38.976  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:07:38.976  Initialization complete. Launching workers.
00:07:38.976  ========================================================
00:07:38.976                                                                             Latency(us)
00:07:38.976  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:38.976  PCIE (0000:00:10.0) NSID 1 from core  2:    9521.39      37.19    1679.91     211.61   13642.24
00:07:38.976  PCIE (0000:00:11.0) NSID 1 from core  2:    9530.99      37.23    1678.51     213.48   18052.67
00:07:38.976  ========================================================
00:07:38.976  Total                                  :   19052.37      74.42    1679.21     211.61   18052.67
00:07:38.976  
00:07:38.976  ************************************
00:07:38.976  END TEST nvme_multi_secondary
00:07:38.976  ************************************
00:07:38.976   07:51:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 61712
00:07:38.976   07:51:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 61713
00:07:38.976  
00:07:38.976  real	0m10.675s
00:07:38.976  user	0m18.047s
00:07:38.976  sys	0m0.655s
00:07:38.976   07:51:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.976   07:51:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
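Editor's note: the nvme_multi_secondary blocks above launch three spdk_nvme_perf instances concurrently on distinct core masks (0x1, 0x2, 0x4), all passing -i 0 so they join the same SPDK shared-memory instance and therefore the same two controllers; the throughput columns are consistent with the 4 KiB I/O size (for example, 21909.06 IOPS × 4096 B ≈ 85.58 MiB/s, as reported). A hedged sketch of the same pattern run by hand; the sleep before the later instances is an assumption, not something the harness shows here:

#!/usr/bin/env bash
# Hedged sketch of the concurrent-perf pattern traced above: three spdk_nvme_perf
# instances on separate core masks, all joining shared-memory instance 0. Queue
# depth, I/O size, workload, and run times are copied from the logged commands.
set -euo pipefail
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
pid0=$!
sleep 1   # assumption: let the first instance initialize before the others join
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
pid1=$!
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
pid2=$!
wait "$pid0" "$pid1" "$pid2"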
00:07:38.976   07:51:54 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:07:38.976   07:51:54 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:07:38.976   07:51:54 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/60941 ]]
00:07:38.976   07:51:54 nvme -- common/autotest_common.sh@1094 -- # kill 60941
00:07:38.977   07:51:54 nvme -- common/autotest_common.sh@1095 -- # wait 60941
00:07:38.977  [2024-11-20 07:51:54.883833] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.883904] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.883923] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.883938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.884712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.884752] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.884770] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977  [2024-11-20 07:51:54.884785] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 61592) is not found. Dropping the request.
00:07:38.977   07:51:54 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:07:38.977   07:51:54 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:07:38.977   07:51:54 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:07:38.977   07:51:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.977   07:51:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.977   07:51:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.977  ************************************
00:07:38.977  START TEST bdev_nvme_reset_stuck_adm_cmd
00:07:38.977  ************************************
00:07:38.977   07:51:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:07:39.235  * Looking for test storage...
00:07:39.235  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:39.235     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:39.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.235  		--rc genhtml_branch_coverage=1
00:07:39.235  		--rc genhtml_function_coverage=1
00:07:39.235  		--rc genhtml_legend=1
00:07:39.235  		--rc geninfo_all_blocks=1
00:07:39.235  		--rc geninfo_unexecuted_blocks=1
00:07:39.235  		
00:07:39.235  		'
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:39.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.235  		--rc genhtml_branch_coverage=1
00:07:39.235  		--rc genhtml_function_coverage=1
00:07:39.235  		--rc genhtml_legend=1
00:07:39.235  		--rc geninfo_all_blocks=1
00:07:39.235  		--rc geninfo_unexecuted_blocks=1
00:07:39.235  		
00:07:39.235  		'
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:39.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.235  		--rc genhtml_branch_coverage=1
00:07:39.235  		--rc genhtml_function_coverage=1
00:07:39.235  		--rc genhtml_legend=1
00:07:39.235  		--rc geninfo_all_blocks=1
00:07:39.235  		--rc geninfo_unexecuted_blocks=1
00:07:39.235  		
00:07:39.235  		'
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:39.235  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.235  		--rc genhtml_branch_coverage=1
00:07:39.235  		--rc genhtml_function_coverage=1
00:07:39.235  		--rc genhtml_legend=1
00:07:39.235  		--rc geninfo_all_blocks=1
00:07:39.235  		--rc geninfo_unexecuted_blocks=1
00:07:39.235  		
00:07:39.235  		'
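Note: the xtrace block above is the coverage setup: the last field of 'lcov --version' is compared against 1.15 by splitting both version strings on '.', '-' and ':' and comparing element by element, and since the installed lcov is new enough, the branch/function coverage flags are exported via LCOV_OPTS and LCOV. A minimal sketch of that comparison pattern (helper name illustrative, not the repository's cmp_versions itself):

# Return 0 when $1 < $2, mirroring the traced '<' case of the version compare.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "installed lcov is new enough for the --rc options"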
00:07:39.235   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:07:39.235   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:07:39.235   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:07:39.235   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:07:39.235   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:07:39.235    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:07:39.236    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:07:39.236     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:07:39.236     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:39.236     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:07:39.236     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:39.236      07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:39.236      07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:39.236     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:07:39.236     07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:07:39.236    07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
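Note: the BDF discovery traced above reduces to generating a bdev config with gen_nvme.sh and pulling every PCIe address out of it with jq; the first entry becomes $bdf. Condensed into a standalone form (same repo layout as this run):

rootdir=/home/vagrant/spdk_repo/spdk
# Enumerate NVMe PCIe addresses (BDFs) from the generated bdev config.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
bdf=${bdfs[0]}          # here: 0000:00:10.0
echo "$bdf"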
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=61864
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 61864
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 61864 ']'
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:39.236  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.236   07:51:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:39.494  [2024-11-20 07:51:55.308386] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:07:39.494  [2024-11-20 07:51:55.308753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61864 ]
00:07:39.494  [2024-11-20 07:51:55.468078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:39.753  [2024-11-20 07:51:55.566198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:39.753  [2024-11-20 07:51:55.566358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:39.753  [2024-11-20 07:51:55.566502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:39.753  [2024-11-20 07:51:55.566509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
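Note: the four "Reactor started" lines line up with the -m 0xF core mask passed to spdk_tgt above: 0xF is binary 1111, i.e. cores 0 through 3. A quick way to expand such a mask when sanity-checking a run:

mask=0xF
for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "core $core"
done
# prints core 0 through core 3, one per line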
00:07:40.320   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.320   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:07:40.320   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:07:40.320   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.320   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:40.578  nvme0n1
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.578    07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_z6iRF.txt
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:40.578  true
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:40.578    07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732089116
00:07:40.578   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=61887
00:07:40.579   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:07:40.579   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:40.579   07:51:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:42.481  [2024-11-20 07:51:58.449266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:42.481  [2024-11-20 07:51:58.449696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:07:42.481  [2024-11-20 07:51:58.449727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:07:42.481  [2024-11-20 07:51:58.449741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:42.481  [2024-11-20 07:51:58.451405] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:07:42.481  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 61887
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 61887
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 61887
00:07:42.481    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.481   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:07:42.481    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_z6iRF.txt
00:07:42.740   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:07:42.740    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:07:42.740    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:07:42.741     07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:07:42.741     07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:07:42.741      07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:07:42.741     07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:07:42.741     07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:07:42.741      07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
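Note: the block above decodes the completion that bdev_nvme_send_cmd wrote, as base64, into the temporary err_inj file. The last two bytes of the 16-byte completion form the status word: bit 0 is the phase tag, bits 1-8 the status code (SC), bits 9-11 the status code type (SCT), which the test checks against the injected --sc 1 / --sct 0. The same decode, condensed (variable names illustrative):

cpl_b64=AAAAAAAAAAAAAAAAAAACAA==
# Decode the 16-byte completion; bytes 14-15 are the little-endian status word.
bytes=($(base64 -d <<< "$cpl_b64" | hexdump -ve '/1 "%u "'))
status=$(( bytes[14] | (bytes[15] << 8) ))
printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
# -> sc=0x1 sct=0x0, matching the injected values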
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_z6iRF.txt
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 61864
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 61864 ']'
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 61864
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:42.741    07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61864
00:07:42.741  killing process with pid 61864
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61864'
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 61864
00:07:42.741   07:51:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 61864
00:07:43.308   07:51:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:07:43.308   07:51:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:07:43.308  
00:07:43.308  real	0m4.251s
00:07:43.308  user	0m15.147s
00:07:43.308  sys	0m0.789s
00:07:43.308  ************************************
00:07:43.308  END TEST bdev_nvme_reset_stuck_adm_cmd
00:07:43.308  ************************************
00:07:43.308   07:51:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.308   07:51:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
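Note: stripped of the tracing, the test that just finished arms a one-shot error injection on admin opcode 10 (Get Features), sends a Get Features (Number of Queues) command that then sits stuck, and verifies that bdev_nvme_reset_controller completes the stuck command with the injected status and finishes well inside the 5 s test_timeout. The RPC sequence, condensed from the trace (assumes spdk_tgt is already running, as above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Arm a single injected error on admin opcode 10, held for up to 15 s.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Send a Get Features (Number of Queues) admin command; it hangs on the injection.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
sleep 2
# The reset must complete the stuck admin command and succeed promptly.
$rpc bdev_nvme_reset_controller nvme0
wait
$rpc bdev_nvme_detach_controller nvme0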
00:07:43.308   07:51:59 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:07:43.308   07:51:59 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:07:43.308   07:51:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:43.308   07:51:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.308   07:51:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:43.308  ************************************
00:07:43.308  START TEST nvme_fio
00:07:43.308  ************************************
00:07:43.308   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:07:43.308   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:07:43.308   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:07:43.308    07:51:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:07:43.308    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:43.308    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:07:43.308    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:43.308     07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:43.308     07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:43.567    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:07:43.567    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0')
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:43.567   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:07:43.825   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:07:43.825   07:51:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:43.825   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:43.825   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:07:43.825   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:07:43.825   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:43.826    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:43.826    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:07:43.826    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:43.826    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:43.826    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:07:43.826    07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:07:43.826   07:51:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:44.084  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:07:44.084  fio-3.35
00:07:44.084  Starting 1 thread
00:07:50.638  
00:07:50.638  test: (groupid=0, jobs=1): err= 0: pid=62015: Wed Nov 20 07:52:06 2024
00:07:50.638    read: IOPS=38.7k, BW=151MiB/s (159MB/s)(303MiB/2001msec)
00:07:50.638      slat (nsec): min=1835, max=155694, avg=2630.81, stdev=1551.53
00:07:50.638      clat (usec): min=392, max=8169, avg=1652.43, stdev=661.18
00:07:50.638       lat (usec): min=396, max=8181, avg=1655.06, stdev=661.34
00:07:50.638      clat percentiles (usec):
00:07:50.638       |  1.00th=[  693],  5.00th=[  848], 10.00th=[  947], 20.00th=[ 1106],
00:07:50.638       | 30.00th=[ 1254], 40.00th=[ 1401], 50.00th=[ 1549], 60.00th=[ 1713],
00:07:50.638       | 70.00th=[ 1893], 80.00th=[ 2089], 90.00th=[ 2409], 95.00th=[ 2769],
00:07:50.638       | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 5866], 99.95th=[ 7177],
00:07:50.638       | 99.99th=[ 8029]
00:07:50.638     bw (  KiB/s): min=150544, max=160792, per=100.00%, avg=155794.67, stdev=5128.69, samples=3
00:07:50.638     iops        : min=37636, max=40198, avg=38948.67, stdev=1282.17, samples=3
00:07:50.638    write: IOPS=38.6k, BW=151MiB/s (158MB/s)(301MiB/2001msec); 0 zone resets
00:07:50.638      slat (nsec): min=1897, max=53144, avg=2772.57, stdev=1368.00
00:07:50.638      clat (usec): min=143, max=8118, avg=1649.93, stdev=654.13
00:07:50.638       lat (usec): min=147, max=8123, avg=1652.70, stdev=654.26
00:07:50.638      clat percentiles (usec):
00:07:50.638       |  1.00th=[  693],  5.00th=[  848], 10.00th=[  938], 20.00th=[ 1106],
00:07:50.638       | 30.00th=[ 1254], 40.00th=[ 1401], 50.00th=[ 1549], 60.00th=[ 1713],
00:07:50.638       | 70.00th=[ 1893], 80.00th=[ 2089], 90.00th=[ 2409], 95.00th=[ 2769],
00:07:50.638       | 99.00th=[ 3982], 99.50th=[ 4293], 99.90th=[ 5735], 99.95th=[ 6128],
00:07:50.638       | 99.99th=[ 7963]
00:07:50.638     bw (  KiB/s): min=149496, max=159968, per=100.00%, avg=155194.67, stdev=5296.97, samples=3
00:07:50.638     iops        : min=37374, max=39992, avg=38798.67, stdev=1324.24, samples=3
00:07:50.638    lat (usec)   : 250=0.01%, 500=0.03%, 750=1.97%, 1000=11.42%
00:07:50.638    lat (msec)   : 2=62.21%, 4=23.38%, 10=0.99%
00:07:50.638    cpu          : usr=98.75%, sys=0.00%, ctx=5, majf=0, minf=0
00:07:50.638    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:07:50.638       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:07:50.638       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:07:50.638       issued rwts: total=77509,77159,0,0 short=0,0,0,0 dropped=0,0,0,0
00:07:50.638       latency   : target=0, window=0, percentile=100.00%, depth=128
00:07:50.638  
00:07:50.638  Run status group 0 (all jobs):
00:07:50.638     READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=303MiB (317MB), run=2001-2001msec
00:07:50.638    WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=301MiB (316MB), run=2001-2001msec
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
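Note: the fio job above is the repository's example_config.fio run through the SPDK NVMe fio plugin: the plugin is injected with LD_PRELOAD, and the target controller is selected through the fio filename, with the ':' characters of the PCI address replaced by '.' so fio does not treat them as separators. The equivalent standalone invocation (paths as in this run):

fio_plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
LD_PRELOAD="$fio_plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

The run for the second controller below repeats the same invocation with traddr=0000.00.11.0.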
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:07:50.638   07:52:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:50.638    07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:50.638    07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:07:50.638    07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:50.638    07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:50.638    07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:07:50.638    07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:07:50.638   07:52:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:07:50.896  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:07:50.896  fio-3.35
00:07:50.896  Starting 1 thread
00:07:57.453  
00:07:57.453  test: (groupid=0, jobs=1): err= 0: pid=62075: Wed Nov 20 07:52:13 2024
00:07:57.453    read: IOPS=42.0k, BW=164MiB/s (172MB/s)(328MiB/2001msec)
00:07:57.453      slat (nsec): min=1841, max=94576, avg=2433.46, stdev=1244.11
00:07:57.453      clat (usec): min=419, max=12276, avg=1521.53, stdev=750.73
00:07:57.453       lat (usec): min=421, max=12278, avg=1523.96, stdev=750.94
00:07:57.453      clat percentiles (usec):
00:07:57.453       |  1.00th=[  594],  5.00th=[  734], 10.00th=[  832], 20.00th=[  988],
00:07:57.453       | 30.00th=[ 1123], 40.00th=[ 1254], 50.00th=[ 1385], 60.00th=[ 1516],
00:07:57.453       | 70.00th=[ 1680], 80.00th=[ 1909], 90.00th=[ 2245], 95.00th=[ 2737],
00:07:57.453       | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 6325], 99.95th=[11469],
00:07:57.453       | 99.99th=[12125]
00:07:57.453     bw (  KiB/s): min=143512, max=185584, per=98.59%, avg=165597.33, stdev=21114.37, samples=3
00:07:57.453     iops        : min=35878, max=46396, avg=41399.33, stdev=5278.59, samples=3
00:07:57.453    write: IOPS=41.8k, BW=163MiB/s (171MB/s)(327MiB/2001msec); 0 zone resets
00:07:57.453      slat (nsec): min=1922, max=75844, avg=2558.23, stdev=1162.54
00:07:57.453      clat (usec): min=328, max=12221, avg=1524.18, stdev=757.29
00:07:57.453       lat (usec): min=331, max=12223, avg=1526.74, stdev=757.50
00:07:57.453      clat percentiles (usec):
00:07:57.453       |  1.00th=[  603],  5.00th=[  734], 10.00th=[  832], 20.00th=[  988],
00:07:57.453       | 30.00th=[ 1123], 40.00th=[ 1254], 50.00th=[ 1385], 60.00th=[ 1516],
00:07:57.453       | 70.00th=[ 1680], 80.00th=[ 1909], 90.00th=[ 2245], 95.00th=[ 2769],
00:07:57.453       | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 6521], 99.95th=[11600],
00:07:57.453       | 99.99th=[11994]
00:07:57.453     bw (  KiB/s): min=143880, max=183312, per=98.60%, avg=164960.00, stdev=19857.04, samples=3
00:07:57.453     iops        : min=35970, max=45828, avg=41240.00, stdev=4964.26, samples=3
00:07:57.453    lat (usec)   : 500=0.08%, 750=5.67%, 1000=14.98%
00:07:57.453    lat (msec)   : 2=62.63%, 4=14.89%, 10=1.68%, 20=0.08%
00:07:57.453    cpu          : usr=98.55%, sys=0.00%, ctx=24, majf=0, minf=0
00:07:57.453    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:07:57.453       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:07:57.453       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:07:57.453       issued rwts: total=84022,83691,0,0 short=0,0,0,0 dropped=0,0,0,0
00:07:57.453       latency   : target=0, window=0, percentile=100.00%, depth=128
00:07:57.453  
00:07:57.453  Run status group 0 (all jobs):
00:07:57.454     READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=328MiB (344MB), run=2001-2001msec
00:07:57.454    WRITE: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=327MiB (343MB), run=2001-2001msec
00:07:57.454  ************************************
00:07:57.454  END TEST nvme_fio
00:07:57.454  ************************************
00:07:57.454   07:52:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:07:57.454   07:52:13 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:07:57.454  
00:07:57.454  real	0m13.929s
00:07:57.454  user	0m12.416s
00:07:57.454  sys	0m0.799s
00:07:57.454   07:52:13 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:57.454   07:52:13 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:57.454  ************************************
00:07:57.454  END TEST nvme
00:07:57.454  ************************************
00:07:57.454  
00:07:57.454  real	1m2.174s
00:07:57.454  user	2m30.328s
00:07:57.454  sys	0m9.933s
00:07:57.454   07:52:13 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:57.454   07:52:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:57.454   07:52:13  -- spdk/autotest.sh@213 -- # [[ 1 -eq 1 ]]
00:07:57.454   07:52:13  -- spdk/autotest.sh@214 -- # run_test nvme_pmr /home/vagrant/spdk_repo/spdk/test/nvme/nvme_pmr.sh
00:07:57.454   07:52:13  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:57.454   07:52:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.454   07:52:13  -- common/autotest_common.sh@10 -- # set +x
00:07:57.454  ************************************
00:07:57.454  START TEST nvme_pmr
00:07:57.454  ************************************
00:07:57.454   07:52:13 nvme_pmr -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_pmr.sh
00:07:57.454  * Looking for test storage...
00:07:57.454  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:57.454    07:52:13 nvme_pmr -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:57.454     07:52:13 nvme_pmr -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:57.454     07:52:13 nvme_pmr -- common/autotest_common.sh@1693 -- # lcov --version
00:07:57.454    07:52:13 nvme_pmr -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@336 -- # IFS=.-:
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@336 -- # read -ra ver1
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@337 -- # IFS=.-:
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@337 -- # read -ra ver2
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@338 -- # local 'op=<'
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@340 -- # ver1_l=2
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@341 -- # ver2_l=1
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@344 -- # case "$op" in
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@345 -- # : 1
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@365 -- # decimal 1
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@353 -- # local d=1
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@355 -- # echo 1
00:07:57.454    07:52:13 nvme_pmr -- scripts/common.sh@365 -- # ver1[v]=1
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@366 -- # decimal 2
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@353 -- # local d=2
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:57.454     07:52:13 nvme_pmr -- scripts/common.sh@355 -- # echo 2
00:07:57.712    07:52:13 nvme_pmr -- scripts/common.sh@366 -- # ver2[v]=2
00:07:57.712    07:52:13 nvme_pmr -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:57.712    07:52:13 nvme_pmr -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:57.712    07:52:13 nvme_pmr -- scripts/common.sh@368 -- # return 0
00:07:57.712    07:52:13 nvme_pmr -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:57.712    07:52:13 nvme_pmr -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:57.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.712  		--rc genhtml_branch_coverage=1
00:07:57.712  		--rc genhtml_function_coverage=1
00:07:57.712  		--rc genhtml_legend=1
00:07:57.712  		--rc geninfo_all_blocks=1
00:07:57.712  		--rc geninfo_unexecuted_blocks=1
00:07:57.712  		
00:07:57.712  		'
00:07:57.712    07:52:13 nvme_pmr -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:57.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.712  		--rc genhtml_branch_coverage=1
00:07:57.712  		--rc genhtml_function_coverage=1
00:07:57.712  		--rc genhtml_legend=1
00:07:57.712  		--rc geninfo_all_blocks=1
00:07:57.712  		--rc geninfo_unexecuted_blocks=1
00:07:57.712  		
00:07:57.712  		'
00:07:57.712    07:52:13 nvme_pmr -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:57.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.712  		--rc genhtml_branch_coverage=1
00:07:57.712  		--rc genhtml_function_coverage=1
00:07:57.712  		--rc genhtml_legend=1
00:07:57.712  		--rc geninfo_all_blocks=1
00:07:57.712  		--rc geninfo_unexecuted_blocks=1
00:07:57.712  		
00:07:57.712  		'
00:07:57.712    07:52:13 nvme_pmr -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:57.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.712  		--rc genhtml_branch_coverage=1
00:07:57.712  		--rc genhtml_function_coverage=1
00:07:57.712  		--rc genhtml_legend=1
00:07:57.712  		--rc geninfo_all_blocks=1
00:07:57.712  		--rc geninfo_unexecuted_blocks=1
00:07:57.712  		
00:07:57.712  		'
00:07:57.712    07:52:13 nvme_pmr -- nvme/nvme_pmr.sh@21 -- # uname
00:07:57.712   07:52:13 nvme_pmr -- nvme/nvme_pmr.sh@21 -- # '[' Linux = Linux ']'
00:07:57.712   07:52:13 nvme_pmr -- nvme/nvme_pmr.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:57.971  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:57.971  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:07:57.971  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:07:57.971   07:52:13 nvme_pmr -- nvme/nvme_pmr.sh@25 -- # run_test nvme_pmr_persistence nvme_pmr_persistence
00:07:57.971   07:52:13 nvme_pmr -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:57.971   07:52:13 nvme_pmr -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.971   07:52:13 nvme_pmr -- common/autotest_common.sh@10 -- # set +x
00:07:57.971  ************************************
00:07:57.971  START TEST nvme_pmr_persistence
00:07:57.971  ************************************
00:07:57.971   07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1129 -- # nvme_pmr_persistence
00:07:57.971   07:52:13 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@12 -- # lbas=(4 8 16 32 64 128 256 512 1024 2048 4096)
00:07:57.971    07:52:13 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@14 -- # get_nvme_bdfs
00:07:57.971    07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:57.971    07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1498 -- # local bdfs
00:07:57.971    07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:57.971     07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:57.971     07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:57.971    07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:07:57.971    07:52:13 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:07:57.971   07:52:13 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@14 -- # for bdf in $(get_nvme_bdfs)
00:07:57.971   07:52:13 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:57.971   07:52:13 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 4 -w 4
00:07:58.229  probe_cb - probed 0000:00:10.0!
00:07:58.229  probe_cb - not probed 0000:00:11.0!
00:07:58.229  attach_cb - attached 0000:00:10.0!
00:07:58.229  PMR Data is Persistent across Controller Reset
00:07:58.229   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:58.229   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 8 -w 8
00:07:58.486  probe_cb - probed 0000:00:10.0!
00:07:58.486  probe_cb - not probed 0000:00:11.0!
00:07:58.486  attach_cb - attached 0000:00:10.0!
00:07:58.486  PMR Data is Persistent across Controller Reset
00:07:58.486   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:58.486   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 16 -w 16
00:07:58.743  probe_cb - probed 0000:00:10.0!
00:07:58.743  probe_cb - not probed 0000:00:11.0!
00:07:58.743  attach_cb - attached 0000:00:10.0!
00:07:58.743  PMR Data is Persistent across Controller Reset
00:07:58.743   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:58.743   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 32 -w 32
00:07:58.743  probe_cb - probed 0000:00:10.0!
00:07:58.743  probe_cb - not probed 0000:00:11.0!
00:07:58.743  attach_cb - attached 0000:00:10.0!
00:07:58.743  PMR Data is Persistent across Controller Reset
00:07:58.743   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:58.743   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 64 -w 64
00:07:59.001  probe_cb - probed 0000:00:10.0!
00:07:59.001  probe_cb - not probed 0000:00:11.0!
00:07:59.001  attach_cb - attached 0000:00:10.0!
00:07:59.001  PMR Data is Persistent across Controller Reset
00:07:59.001   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:59.001   07:52:14 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 128 -w 128
00:07:59.260  probe_cb - probed 0000:00:10.0!
00:07:59.260  probe_cb - not probed 0000:00:11.0!
00:07:59.260  attach_cb - attached 0000:00:10.0!
00:07:59.260  PMR Data is Persistent across Controller Reset
00:07:59.260   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:59.260   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 256 -w 256
00:07:59.519  probe_cb - probed 0000:00:10.0!
00:07:59.519  probe_cb - not probed 0000:00:11.0!
00:07:59.519  attach_cb - attached 0000:00:10.0!
00:07:59.519  PMR Data is Persistent across Controller Reset
00:07:59.519   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:59.519   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 512 -w 512
00:07:59.519  probe_cb - probed 0000:00:10.0!
00:07:59.519  probe_cb - not probed 0000:00:11.0!
00:07:59.519  attach_cb - attached 0000:00:10.0!
00:07:59.519  PMR Data is Persistent across Controller Reset
00:07:59.519   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:59.519   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 1024 -w 1024
00:07:59.778  probe_cb - probed 0000:00:10.0!
00:07:59.778  probe_cb - not probed 0000:00:11.0!
00:07:59.778  attach_cb - attached 0000:00:10.0!
00:07:59.778  PMR Data is Persistent across Controller Reset
00:07:59.778   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:07:59.778   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 2048 -w 2048
00:08:00.036  probe_cb - probed 0000:00:10.0!
00:08:00.036  probe_cb - not probed 0000:00:11.0!
00:08:00.036  attach_cb - attached 0000:00:10.0!
00:08:00.036  PMR Data is Persistent across Controller Reset
00:08:00.036   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:00.036   07:52:15 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:10.0 -n 1 -r 0 -l 4096 -w 4096
00:08:00.296  probe_cb - probed 0000:00:10.0!
00:08:00.296  probe_cb - not probed 0000:00:11.0!
00:08:00.296  attach_cb - attached 0000:00:10.0!
00:08:00.296  PMR Data is Persistent across Controller Reset
00:08:00.296   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@14 -- # for bdf in $(get_nvme_bdfs)
00:08:00.296   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:00.296   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 4 -w 4
00:08:00.554  probe_cb - not probed 0000:00:10.0!
00:08:00.554  probe_cb - probed 0000:00:11.0!
00:08:00.554  attach_cb - attached 0000:00:11.0!
00:08:00.554  PMR Data is Persistent across Controller Reset
00:08:00.554   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:00.554   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 8 -w 8
00:08:00.554  probe_cb - not probed 0000:00:10.0!
00:08:00.554  probe_cb - probed 0000:00:11.0!
00:08:00.554  attach_cb - attached 0000:00:11.0!
00:08:00.554  PMR Data is Persistent across Controller Reset
00:08:00.554   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:00.554   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 16 -w 16
00:08:00.812  probe_cb - not probed 0000:00:10.0!
00:08:00.812  probe_cb - probed 0000:00:11.0!
00:08:00.812  attach_cb - attached 0000:00:11.0!
00:08:00.812  PMR Data is Persistent across Controller Reset
00:08:00.812   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:00.812   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 32 -w 32
00:08:01.070  probe_cb - not probed 0000:00:10.0!
00:08:01.070  probe_cb - probed 0000:00:11.0!
00:08:01.070  attach_cb - attached 0000:00:11.0!
00:08:01.070  PMR Data is Persistent across Controller Reset
00:08:01.070   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:01.070   07:52:16 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 64 -w 64
00:08:01.328  probe_cb - not probed 0000:00:10.0!
00:08:01.328  probe_cb - probed 0000:00:11.0!
00:08:01.328  attach_cb - attached 0000:00:11.0!
00:08:01.328  PMR Data is Persistent across Controller Reset
00:08:01.328   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:01.328   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 128 -w 128
00:08:01.328  probe_cb - not probed 0000:00:10.0!
00:08:01.328  probe_cb - probed 0000:00:11.0!
00:08:01.328  attach_cb - attached 0000:00:11.0!
00:08:01.328  PMR Data is Persistent across Controller Reset
00:08:01.328   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:01.328   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 256 -w 256
00:08:01.628  probe_cb - not probed 0000:00:10.0!
00:08:01.628  probe_cb - probed 0000:00:11.0!
00:08:01.628  attach_cb - attached 0000:00:11.0!
00:08:01.628  PMR Data is Persistent across Controller Reset
00:08:01.628   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:01.628   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 512 -w 512
00:08:01.887  probe_cb - not probed 0000:00:10.0!
00:08:01.887  probe_cb - probed 0000:00:11.0!
00:08:01.887  attach_cb - attached 0000:00:11.0!
00:08:01.887  PMR Data is Persistent across Controller Reset
00:08:01.887   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:01.887   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 1024 -w 1024
00:08:02.145  probe_cb - not probed 0000:00:10.0!
00:08:02.145  probe_cb - probed 0000:00:11.0!
00:08:02.145  attach_cb - attached 0000:00:11.0!
00:08:02.145  PMR Data is Persistent across Controller Reset
00:08:02.145   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:02.145   07:52:17 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 2048 -w 2048
00:08:02.145  probe_cb - not probed 0000:00:10.0!
00:08:02.145  probe_cb - probed 0000:00:11.0!
00:08:02.145  attach_cb - attached 0000:00:11.0!
00:08:02.145  PMR Data is Persistent across Controller Reset
00:08:02.145   07:52:18 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@15 -- # for lba in "${lbas[@]}"
00:08:02.146   07:52:18 nvme_pmr.nvme_pmr_persistence -- nvme/nvme_pmr.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/examples/pmr_persistence -p 0000:00:11.0 -n 1 -r 0 -l 4096 -w 4096
00:08:02.404  probe_cb - not probed 0000:00:10.0!
00:08:02.404  probe_cb - probed 0000:00:11.0!
00:08:02.404  attach_cb - attached 0000:00:11.0!
00:08:02.404  PMR Data is Persistent across Controller Reset
00:08:02.404  
00:08:02.404  real	0m4.556s
00:08:02.404  user	0m1.314s
00:08:02.404  sys	0m1.092s
00:08:02.404   07:52:18 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.404   07:52:18 nvme_pmr.nvme_pmr_persistence -- common/autotest_common.sh@10 -- # set +x
00:08:02.404  ************************************
00:08:02.404  END TEST nvme_pmr_persistence
00:08:02.404  ************************************
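Note: the nvme_pmr_persistence output above is a nested loop: for each controller and each size in the lbas list (4, 8, ..., 4096), the pmr_persistence example writes through the controller's Persistent Memory Region, resets the controller, and reports whether the data survived ("PMR Data is Persistent across Controller Reset"). Condensed from the trace:

rootdir=/home/vagrant/spdk_repo/spdk
lbas=(4 8 16 32 64 128 256 512 1024 2048 4096)
for bdf in 0000:00:10.0 0000:00:11.0; do
    for lba in "${lbas[@]}"; do
        # Exercise the PMR at this transfer size, with a controller reset in between.
        "$rootdir/build/examples/pmr_persistence" -p "$bdf" -n 1 -r 0 -l "$lba" -w "$lba"
    done
done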
00:08:02.691  ************************************
00:08:02.691  END TEST nvme_pmr
00:08:02.691  ************************************
00:08:02.691  
00:08:02.691  real	0m5.168s
00:08:02.691  user	0m1.578s
00:08:02.691  sys	0m1.468s
00:08:02.691   07:52:18 nvme_pmr -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:02.691   07:52:18 nvme_pmr -- common/autotest_common.sh@10 -- # set +x
00:08:02.691   07:52:18  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:08:02.692   07:52:18  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:02.692   07:52:18  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:02.692   07:52:18  -- common/autotest_common.sh@10 -- # set +x
00:08:02.692  ************************************
00:08:02.692  START TEST nvme_scc
00:08:02.692  ************************************
00:08:02.692   07:52:18 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:08:02.692  * Looking for test storage...
00:08:02.692  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:02.692     07:52:18 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:02.692      07:52:18 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:02.692      07:52:18 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:02.692     07:52:18 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@345 -- # : 1
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:02.692      07:52:18 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:08:02.692      07:52:18 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:08:02.692      07:52:18 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.692      07:52:18 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:08:02.692     07:52:18 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:02.692      07:52:18 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:08:02.952      07:52:18 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:08:02.952      07:52:18 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:02.952      07:52:18 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:08:02.952     07:52:18 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:02.952     07:52:18 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:02.952     07:52:18 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:02.952     07:52:18 nvme_scc -- scripts/common.sh@368 -- # return 0
00:08:02.952     07:52:18 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:02.952     07:52:18 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:02.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.952  		--rc genhtml_branch_coverage=1
00:08:02.952  		--rc genhtml_function_coverage=1
00:08:02.952  		--rc genhtml_legend=1
00:08:02.952  		--rc geninfo_all_blocks=1
00:08:02.952  		--rc geninfo_unexecuted_blocks=1
00:08:02.952  		
00:08:02.952  		'
00:08:02.952     07:52:18 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:02.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.952  		--rc genhtml_branch_coverage=1
00:08:02.952  		--rc genhtml_function_coverage=1
00:08:02.952  		--rc genhtml_legend=1
00:08:02.952  		--rc geninfo_all_blocks=1
00:08:02.952  		--rc geninfo_unexecuted_blocks=1
00:08:02.952  		
00:08:02.952  		'
00:08:02.952     07:52:18 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:02.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.952  		--rc genhtml_branch_coverage=1
00:08:02.952  		--rc genhtml_function_coverage=1
00:08:02.952  		--rc genhtml_legend=1
00:08:02.952  		--rc geninfo_all_blocks=1
00:08:02.952  		--rc geninfo_unexecuted_blocks=1
00:08:02.952  		
00:08:02.952  		'
00:08:02.952     07:52:18 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:02.952  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.952  		--rc genhtml_branch_coverage=1
00:08:02.952  		--rc genhtml_function_coverage=1
00:08:02.952  		--rc genhtml_legend=1
00:08:02.952  		--rc geninfo_all_blocks=1
00:08:02.952  		--rc geninfo_unexecuted_blocks=1
00:08:02.952  		
00:08:02.952  		'
00:08:02.952    07:52:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:02.952       07:52:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:02.952      07:52:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:08:02.952     07:52:18 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:08:02.952     07:52:18 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:02.952      07:52:18 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:08:02.953      07:52:18 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:02.953      07:52:18 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:02.953      07:52:18 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:02.953       07:52:18 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.953       07:52:18 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.953       07:52:18 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.953       07:52:18 nvme_scc -- paths/export.sh@5 -- # export PATH
00:08:02.953       07:52:18 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:08:02.953     07:52:18 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:08:02.953    07:52:18 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:02.953    07:52:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:08:02.953   07:52:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:08:02.953   07:52:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:08:02.953   07:52:18 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:03.212  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:03.212  Waiting for block devices as requested
00:08:03.212  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:03.474  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:03.474   07:52:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:08:03.474   07:52:19 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:03.474   07:52:19 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:08:03.474   07:52:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:03.474   07:52:19 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:08:03.474    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:08:03.474   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:08:03.475    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.475   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:08:03.476    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:08:03.476   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:08:03.477    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:08:03.477   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:08:03.478    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:08:03.478   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:08:03.479    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.479   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
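[editor's note] The trace above shows nvme/functions.sh finishing the nvme0 controller: after parsing its id-ctrl and id-ns output it records the controller in the ctrls/nvmes/bdfs/ordered_ctrls arrays (functions.sh@58-63) and then moves on to /sys/class/nvme/nvme1. A minimal sketch of that discovery loop, reconstructed only from the script/line references visible in this xtrace (the wrapper name scan_nvme_ctrls and the sysfs readlink for the BDF are assumptions; pci_can_use and nvme_get appear in the log itself):

    # Sketch of the controller scan traced above (functions.sh@47-63); the real
    # SPDK helper may differ in details.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_nvme_ctrls() {    # hypothetical wrapper name for the traced loop
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}  # assumption: BDF from sysfs
            pci_can_use "$pci" || continue       # scripts/common.sh filter seen in the trace
            ctrl_dev=${ctrl##*/}                 # e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}                 # e.g. nvme1n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }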
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:08:03.741   07:52:19 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:03.741   07:52:19 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:08:03.741   07:52:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:03.741   07:52:19 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:08:03.741    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.741   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:08:03.742    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.742   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:08:03.743    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.743   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
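[editor's note] The long run of functions.sh@21-23 lines above is the id-ctrl parse for nvme1: each "key : value" line emitted by nvme-cli is split on the first colon and eval'd into an associative array named after the device. A minimal sketch of that parser as traced at functions.sh@16-23 (key/value trimming is approximated from the stored values in the log; treat quoting details as an assumption):

    # Sketch of the nvme_get parser traced above; an approximation, not the
    # verbatim upstream function.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                          # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # skip header lines with no value
            eval "${ref}[${reg// /}]=\"${val# }\""   # e.g. nvme1[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # binary path as shown in the trace
    }

Called as in the log, e.g. nvme_get nvme1 id-ctrl /dev/nvme1, it leaves the parsed fields available as ${nvme1[vid]}, ${nvme1[subnqn]}, and so on; the namespace loop that follows does the same for nvme1n1 via id-ns.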
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x140000"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x140000
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x140000"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x140000
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x140000"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x140000
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:08:03.744    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.744   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:08:03.745    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.745   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:03.746   07:52:19 nvme_scc -- nvme/functions.sh@65 -- # (( 2 > 0 ))
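The trace above is nvme/functions.sh walking `nvme id-ns` output line by line: each "field : value" pair is split on the colon (IFS=:) and stored into the nvme1n1 associative array via eval, after which the namespace and controller are registered in _ctrl_ns, ctrls, nvmes and bdfs. A stripped-down sketch of that parsing pattern, assuming nvme-cli's usual "name : value" id-ns layout and a hypothetical /dev/nvme0n1 device (the real functions.sh additionally normalizes keys such as lbaf0..lbaf7, as seen in the trace):

    # Sketch: collect `nvme id-ns` fields into a bash associative array (simplified).
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}                     # drop the padding after the field name
        [[ -n $reg ]] || continue
        ns[$reg]=${val#"${val%%[![:space:]]*}"}      # drop the padding before the value
    done < <(nvme id-ns /dev/nvme0n1)
    echo "nsze=${ns[nsze]} nuse=${ns[nuse]} flbas=${ns[flbas]}"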
00:08:03.746    07:52:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@192 -- # (( 2 == 0 ))
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:03.746      07:52:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:03.746     07:52:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@207 -- # (( 2 > 0 ))
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:08:03.746    07:52:19 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:08:03.746   07:52:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:08:03.746   07:52:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
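get_ctrl_with_feature/ctrl_has_scc above selects nvme1 by reading the controller's ONCS value (0x15d) and testing bit 8, which NVMe uses to advertise the Copy (simple copy) command. The same check can be made directly with nvme-cli; the device path below is a placeholder and the "oncs : 0x..." output layout is the usual nvme-cli formatting, not taken from this log:

    # Sketch: does this controller advertise the Simple Copy command (ONCS bit 8)?
    oncs=$(nvme id-ctrl /dev/nvme0 | awk '/^oncs/ {print $3}')   # e.g. 0x15d
    if (( oncs & (1 << 8) )); then
        echo "Simple Copy supported"
    else
        echo "Simple Copy not supported"
    fi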
00:08:03.746   07:52:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:04.314  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:04.572  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:04.572  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:04.572   07:52:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:04.572   07:52:20 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:04.572   07:52:20 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:04.572   07:52:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:04.572  ************************************
00:08:04.572  START TEST nvme_simple_copy
00:08:04.572  ************************************
00:08:04.572   07:52:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:04.831  Initializing NVMe Controllers
00:08:04.831  Attaching to 0000:00:10.0
00:08:04.831  Controller supports SCC. Attached to 0000:00:10.0
00:08:04.831    Namespace ID: 1 size: 5GB
00:08:04.831  Initialization complete.
00:08:04.831  
00:08:04.831  Controller QEMU NVMe Ctrl       (12340               )
00:08:04.831  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:08:04.831  Namespace Block Size:4096
00:08:04.831  Writing LBAs 0 to 63 with Random Data
00:08:04.831  Copied LBAs from 0 - 63 to the Destination LBA 256
00:08:04.831  LBAs matching Written Data: 64
00:08:04.831  ************************************
00:08:04.831  END TEST nvme_simple_copy
00:08:04.831  ************************************
00:08:04.831  
00:08:04.831  real	0m0.214s
00:08:04.831  user	0m0.067s
00:08:04.831  sys	0m0.047s
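The simple_copy example above wrote random data to LBAs 0-63, issued one Copy command with destination LBA 256, then read both ranges back and reported "LBAs matching Written Data: 64". A rough way to re-verify the same result from the host side, assuming the device has been rebound to the kernel nvme driver (setup.sh reset) and shows up as /dev/nvme1n1 with the 4096-byte block size reported above:

    # Sketch: compare LBAs 0-63 with LBAs 256-319 after the copy (block size 4096 assumed).
    bs=4096
    dd if=/dev/nvme1n1 bs=$bs skip=0   count=64 status=none > /tmp/src.bin
    dd if=/dev/nvme1n1 bs=$bs skip=256 count=64 status=none > /tmp/dst.bin
    cmp --silent /tmp/src.bin /tmp/dst.bin && echo "64 LBAs match" || echo "mismatch"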
00:08:04.831   07:52:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.831   07:52:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:08:04.831  ************************************
00:08:04.831  END TEST nvme_scc
00:08:04.831  ************************************
00:08:04.831  
00:08:04.831  real	0m2.235s
00:08:04.831  user	0m0.924s
00:08:04.831  sys	0m1.093s
00:08:04.831   07:52:20 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.831   07:52:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:04.831   07:52:20  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:08:04.831   07:52:20  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:08:04.831   07:52:20  -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]]
00:08:04.831   07:52:20  -- spdk/autotest.sh@226 -- # run_test nvme_cmb /home/vagrant/spdk_repo/spdk/test/nvme/cmb/cmb.sh
00:08:04.831   07:52:20  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:04.831   07:52:20  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:04.831   07:52:20  -- common/autotest_common.sh@10 -- # set +x
00:08:04.831  ************************************
00:08:04.831  START TEST nvme_cmb
00:08:04.831  ************************************
00:08:04.831   07:52:20 nvme_cmb -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/cmb/cmb.sh
00:08:04.831  * Looking for test storage...
00:08:04.831  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/cmb
00:08:04.831    07:52:20 nvme_cmb -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:05.090     07:52:20 nvme_cmb -- common/autotest_common.sh@1693 -- # lcov --version
00:08:05.090     07:52:20 nvme_cmb -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:05.090    07:52:20 nvme_cmb -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@336 -- # IFS=.-:
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@336 -- # read -ra ver1
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@337 -- # IFS=.-:
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@337 -- # read -ra ver2
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@338 -- # local 'op=<'
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@340 -- # ver1_l=2
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@341 -- # ver2_l=1
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@344 -- # case "$op" in
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@345 -- # : 1
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@365 -- # decimal 1
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@353 -- # local d=1
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@355 -- # echo 1
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@365 -- # ver1[v]=1
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@366 -- # decimal 2
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@353 -- # local d=2
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:05.090     07:52:20 nvme_cmb -- scripts/common.sh@355 -- # echo 2
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@366 -- # ver2[v]=2
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:05.090    07:52:20 nvme_cmb -- scripts/common.sh@368 -- # return 0
00:08:05.090    07:52:20 nvme_cmb -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:05.090    07:52:20 nvme_cmb -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:05.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.090  		--rc genhtml_branch_coverage=1
00:08:05.090  		--rc genhtml_function_coverage=1
00:08:05.090  		--rc genhtml_legend=1
00:08:05.090  		--rc geninfo_all_blocks=1
00:08:05.090  		--rc geninfo_unexecuted_blocks=1
00:08:05.090  		
00:08:05.090  		'
00:08:05.090    07:52:20 nvme_cmb -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:05.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.090  		--rc genhtml_branch_coverage=1
00:08:05.090  		--rc genhtml_function_coverage=1
00:08:05.090  		--rc genhtml_legend=1
00:08:05.090  		--rc geninfo_all_blocks=1
00:08:05.090  		--rc geninfo_unexecuted_blocks=1
00:08:05.090  		
00:08:05.090  		'
00:08:05.090    07:52:20 nvme_cmb -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:05.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.090  		--rc genhtml_branch_coverage=1
00:08:05.090  		--rc genhtml_function_coverage=1
00:08:05.090  		--rc genhtml_legend=1
00:08:05.090  		--rc geninfo_all_blocks=1
00:08:05.090  		--rc geninfo_unexecuted_blocks=1
00:08:05.090  		
00:08:05.090  		'
00:08:05.090    07:52:20 nvme_cmb -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:05.090  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.090  		--rc genhtml_branch_coverage=1
00:08:05.090  		--rc genhtml_function_coverage=1
00:08:05.090  		--rc genhtml_legend=1
00:08:05.090  		--rc geninfo_all_blocks=1
00:08:05.090  		--rc geninfo_unexecuted_blocks=1
00:08:05.090  		
00:08:05.090  		'
00:08:05.090   07:52:20 nvme_cmb -- cmb/cmb.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:05.349  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:05.349  Waiting for block devices as requested
00:08:05.349  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:05.608  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:05.608   07:52:21 nvme_cmb -- cmb/cmb.sh@72 -- # xtrace_disable
00:08:05.608   07:52:21 nvme_cmb -- common/autotest_common.sh@10 -- # set +x
00:08:05.912  * nvme0 (0000:00:11.0:QEMU NVMe Ctrl                          :12341               :pcie) CMB:
00:08:05.912    SZ:    128 MiB
00:08:05.912    SZU:   1 MiB
00:08:05.912    WDS:   set
00:08:05.912    RDS:   set
00:08:05.912    LISTS: set
00:08:05.912    CQS:   not set
00:08:05.912    SQS:   set
00:08:05.912  
00:08:05.912    OFST:    0x0
00:08:05.912    CQDA:    not set
00:08:05.912    CDMMMS:  not set
00:08:05.912    CDPCILS: set
00:08:05.912    CDPMLS:  set
00:08:05.912    CQPDS:   not set
00:08:05.912    CQMMS:   not set
00:08:05.912    BIR:     0x2
00:08:05.912  
00:08:05.912  * nvme1 (0000:00:10.0:QEMU NVMe Ctrl                          :12340               :pcie) CMB:
00:08:05.912    SZ:    128 MiB
00:08:05.912    SZU:   1 MiB
00:08:05.912    WDS:   set
00:08:05.912    RDS:   set
00:08:05.912    LISTS: set
00:08:05.912    CQS:   not set
00:08:05.912    SQS:   set
00:08:05.912  
00:08:05.912    OFST:    0x0
00:08:05.912    CQDA:    not set
00:08:05.912    CDMMMS:  not set
00:08:05.912    CDPCILS: set
00:08:05.912    CDPMLS:  set
00:08:05.912    CQPDS:   not set
00:08:05.912    CQMMS:   not set
00:08:05.912    BIR:     0x2
00:08:05.912  
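The two CMB dumps above come from decoding each controller's CMBSZ/CMBLOC registers: SZ and SZU together give the 128 MiB size, the SQS/CQS/LISTS/RDS/WDS bits are the low flags of CMBSZ, and BIR/OFST (plus the CQDA..CQMMS bits on newer controllers) live in CMBLOC. A small sketch of the SZ/SZU arithmetic, using a hypothetical register value chosen to match the flags printed above rather than one read from this system:

    # Sketch: decode CMBSZ (NVMe spec layout: SZ bits 31:12, SZU bits 11:8, flags in bits 4:0).
    cmbsz=0x0008021D                                # hypothetical: SZ=128, SZU=1 MiB, SQS/LISTS/RDS/WDS set
    szu_field=$(( (cmbsz >> 8) & 0xF ))             # 0=4KiB, 1=64KiB, 2=1MiB, ...
    sz_field=$((  cmbsz >> 12 ))                    # size, counted in SZU units
    unit_bytes=$(( 4096 << (4 * szu_field) ))       # each SZU step is a factor of 16
    echo "CMB: $(( sz_field * unit_bytes / 1048576 )) MiB, SQS=$(( cmbsz & 1 )), CQS=$(( (cmbsz >> 1) & 1 ))"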
00:08:05.912   07:52:21 nvme_cmb -- cmb/cmb.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:06.170  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:06.428  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:06.428  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:06.428   07:52:22 nvme_cmb -- cmb/cmb.sh@112 -- # for nvme in "${!cmb_nvmes[@]}"
00:08:06.428   07:52:22 nvme_cmb -- cmb/cmb.sh@113 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:08:06.428   07:52:22 nvme_cmb -- cmb/cmb.sh@112 -- # for nvme in "${!cmb_nvmes[@]}"
00:08:06.428   07:52:22 nvme_cmb -- cmb/cmb.sh@113 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:08:06.428   07:52:22 nvme_cmb -- cmb/cmb.sh@119 -- # (( 2 == 0 ))
00:08:06.428   07:52:22 nvme_cmb -- cmb/cmb.sh@124 -- # run_test cmb_copy /home/vagrant/spdk_repo/spdk/test/nvme/cmb/cmb_copy.sh 0000:00:11.0 0000:00:10.0
00:08:06.428   07:52:22 nvme_cmb -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:06.428   07:52:22 nvme_cmb -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:06.428   07:52:22 nvme_cmb -- common/autotest_common.sh@10 -- # set +x
00:08:06.428  ************************************
00:08:06.428  START TEST cmb_copy
00:08:06.428  ************************************
00:08:06.428   07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/cmb/cmb_copy.sh 0000:00:11.0 0000:00:10.0
00:08:06.687  * Looking for test storage...
00:08:06.687  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/cmb
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1693 -- # lcov --version
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@336 -- # IFS=.-:
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@336 -- # read -ra ver1
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@337 -- # IFS=.-:
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@337 -- # read -ra ver2
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@338 -- # local 'op=<'
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@340 -- # ver1_l=2
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@341 -- # ver2_l=1
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@344 -- # case "$op" in
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@345 -- # : 1
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@365 -- # decimal 1
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@353 -- # local d=1
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@355 -- # echo 1
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@365 -- # ver1[v]=1
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@366 -- # decimal 2
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@353 -- # local d=2
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:06.687     07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@355 -- # echo 2
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@366 -- # ver2[v]=2
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- scripts/common.sh@368 -- # return 0
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:06.687  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.687  		--rc genhtml_branch_coverage=1
00:08:06.687  		--rc genhtml_function_coverage=1
00:08:06.687  		--rc genhtml_legend=1
00:08:06.687  		--rc geninfo_all_blocks=1
00:08:06.687  		--rc geninfo_unexecuted_blocks=1
00:08:06.687  		
00:08:06.687  		'
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:06.687  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.687  		--rc genhtml_branch_coverage=1
00:08:06.687  		--rc genhtml_function_coverage=1
00:08:06.687  		--rc genhtml_legend=1
00:08:06.687  		--rc geninfo_all_blocks=1
00:08:06.687  		--rc geninfo_unexecuted_blocks=1
00:08:06.687  		
00:08:06.687  		'
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:06.687  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.687  		--rc genhtml_branch_coverage=1
00:08:06.687  		--rc genhtml_function_coverage=1
00:08:06.687  		--rc genhtml_legend=1
00:08:06.687  		--rc geninfo_all_blocks=1
00:08:06.687  		--rc geninfo_unexecuted_blocks=1
00:08:06.687  		
00:08:06.687  		'
00:08:06.687    07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:06.687  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:06.687  		--rc genhtml_branch_coverage=1
00:08:06.687  		--rc genhtml_function_coverage=1
00:08:06.687  		--rc genhtml_legend=1
00:08:06.687  		--rc geninfo_all_blocks=1
00:08:06.687  		--rc geninfo_unexecuted_blocks=1
00:08:06.687  		
00:08:06.687  		'
00:08:06.687   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@11 -- # nvmes=("$@")
00:08:06.687   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@11 -- # all_nvmes=2
00:08:06.688   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@14 -- # (( all_nvmes >= 2 ))
00:08:06.688   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@17 -- # (( --all_nvmes >= 0 ))
00:08:06.688   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@18 -- # read_nvme=0000:00:10.0
00:08:06.688   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@19 -- # for nvme_idx in "${!nvmes[@]}"
00:08:06.688   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@20 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:08:06.688   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/cmb_copy -r 0000:00:10.0-1-0-1 -w 0000:00:11.0-1-0-1 -c 0000:00:10.0
00:08:06.946  probe_cb - probed 0000:00:10.0!
00:08:06.946  probe_cb - probed 0000:00:11.0!
00:08:06.946  attach_cb - attached 0000:00:10.0!
00:08:06.946  attach_cb - attached 0000:00:11.0!
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@19 -- # for nvme_idx in "${!nvmes[@]}"
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@20 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@20 -- # continue
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@17 -- # (( --all_nvmes >= 0 ))
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@18 -- # read_nvme=0000:00:11.0
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@19 -- # for nvme_idx in "${!nvmes[@]}"
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@20 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@20 -- # continue
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@19 -- # for nvme_idx in "${!nvmes[@]}"
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@20 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/cmb_copy -r 0000:00:11.0-1-0-1 -w 0000:00:10.0-1-0-1 -c 0000:00:11.0
00:08:06.946  probe_cb - probed 0000:00:10.0!
00:08:06.946  probe_cb - probed 0000:00:11.0!
00:08:06.946  attach_cb - attached 0000:00:10.0!
00:08:06.946  attach_cb - attached 0000:00:11.0!
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- cmb/cmb_copy.sh@17 -- # (( --all_nvmes >= 0 ))
00:08:06.946  
00:08:06.946  real	0m0.536s
00:08:06.946  user	0m0.186s
00:08:06.946  sys	0m0.156s
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:06.946   07:52:22 nvme_cmb.cmb_copy -- common/autotest_common.sh@10 -- # set +x
00:08:06.946  ************************************
00:08:06.946  END TEST cmb_copy
00:08:06.946  ************************************
00:08:07.205  
00:08:07.205  real	0m2.227s
00:08:07.205  user	0m0.810s
00:08:07.205  sys	0m1.215s
00:08:07.205   07:52:23 nvme_cmb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.205   07:52:23 nvme_cmb -- common/autotest_common.sh@10 -- # set +x
00:08:07.205  ************************************
00:08:07.205  END TEST nvme_cmb
00:08:07.205  ************************************
00:08:07.205   07:52:23  -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]]
00:08:07.205   07:52:23  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:08:07.205   07:52:23  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:08:07.205   07:52:23  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:07.205   07:52:23  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.205   07:52:23  -- common/autotest_common.sh@10 -- # set +x
00:08:07.205  ************************************
00:08:07.205  START TEST nvme_rpc
00:08:07.205  ************************************
00:08:07.205   07:52:23 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:08:07.205  * Looking for test storage...
00:08:07.205  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:07.205    07:52:23 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:07.205     07:52:23 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:07.205     07:52:23 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:07.463     07:52:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:07.463    07:52:23 nvme_rpc -- scripts/common.sh@368 -- # return 0
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:07.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:07.463  		--rc genhtml_branch_coverage=1
00:08:07.463  		--rc genhtml_function_coverage=1
00:08:07.463  		--rc genhtml_legend=1
00:08:07.463  		--rc geninfo_all_blocks=1
00:08:07.463  		--rc geninfo_unexecuted_blocks=1
00:08:07.463  		
00:08:07.463  		'
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:07.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:07.463  		--rc genhtml_branch_coverage=1
00:08:07.463  		--rc genhtml_function_coverage=1
00:08:07.463  		--rc genhtml_legend=1
00:08:07.463  		--rc geninfo_all_blocks=1
00:08:07.463  		--rc geninfo_unexecuted_blocks=1
00:08:07.463  		
00:08:07.463  		'
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:07.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:07.463  		--rc genhtml_branch_coverage=1
00:08:07.463  		--rc genhtml_function_coverage=1
00:08:07.463  		--rc genhtml_legend=1
00:08:07.463  		--rc geninfo_all_blocks=1
00:08:07.463  		--rc geninfo_unexecuted_blocks=1
00:08:07.463  		
00:08:07.463  		'
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:07.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:07.463  		--rc genhtml_branch_coverage=1
00:08:07.463  		--rc genhtml_function_coverage=1
00:08:07.463  		--rc genhtml_legend=1
00:08:07.463  		--rc geninfo_all_blocks=1
00:08:07.463  		--rc geninfo_unexecuted_blocks=1
00:08:07.463  		
00:08:07.463  		'
00:08:07.463   07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:07.463    07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:08:07.463    07:52:23 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:08:07.463     07:52:23 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:08:07.464     07:52:23 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:07.464     07:52:23 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:08:07.464     07:52:23 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:07.464      07:52:23 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:07.464      07:52:23 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:07.464     07:52:23 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:08:07.464     07:52:23 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:08:07.464    07:52:23 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:08:07.464   07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
00:08:07.464   07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=63342
00:08:07.464   07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:08:07.464   07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:08:07.464   07:52:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 63342
00:08:07.464   07:52:23 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 63342 ']'
00:08:07.464   07:52:23 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.464   07:52:23 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:07.464  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:07.464   07:52:23 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.464   07:52:23 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:07.464   07:52:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:07.464  [2024-11-20 07:52:23.438355] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:08:07.464  [2024-11-20 07:52:23.439418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63342 ]
00:08:07.722  [2024-11-20 07:52:23.574412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:07.722  [2024-11-20 07:52:23.663689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:07.722  [2024-11-20 07:52:23.663704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.655   07:52:24 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:08.655   07:52:24 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:08.655   07:52:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:08:09.221  Nvme0n1
00:08:09.221   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:08:09.221   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:08:09.478  request:
00:08:09.478  {
00:08:09.478    "bdev_name": "Nvme0n1",
00:08:09.478    "filename": "non_existing_file",
00:08:09.478    "method": "bdev_nvme_apply_firmware",
00:08:09.478    "req_id": 1
00:08:09.478  }
00:08:09.478  Got JSON-RPC error response
00:08:09.478  response:
00:08:09.478  {
00:08:09.478    "code": -32603,
00:08:09.478    "message": "open file failed."
00:08:09.478  }
00:08:09.478   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:08:09.478   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:08:09.478   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
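The nvme_rpc flow above is: attach the PCIe controller as bdev Nvme0 (exposing Nvme0n1), ask bdev_nvme_apply_firmware to load a file that does not exist, confirm the expected -32603 "open file failed." error, and detach again. Driven by hand it looks like the sketch below; the commands are the ones traced above and /var/tmp/spdk.sock is rpc.py's default socket:

    # Sketch: reproduce the negative firmware-update test against a running spdk_tgt.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0     # prints Nvme0n1
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected (no such firmware image)"
    fi
    $rpc bdev_nvme_detach_controller Nvme0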
00:08:10.042   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:08:10.042   07:52:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 63342
00:08:10.042   07:52:25 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 63342 ']'
00:08:10.042   07:52:25 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 63342
00:08:10.042    07:52:25 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:08:10.042   07:52:25 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:10.042    07:52:25 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63342
00:08:10.042   07:52:26 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:10.042  killing process with pid 63342
00:08:10.042   07:52:26 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:10.042   07:52:26 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63342'
00:08:10.042   07:52:26 nvme_rpc -- common/autotest_common.sh@973 -- # kill 63342
00:08:10.042   07:52:26 nvme_rpc -- common/autotest_common.sh@978 -- # wait 63342
00:08:10.626  ************************************
00:08:10.626  END TEST nvme_rpc
00:08:10.626  ************************************
00:08:10.626  
00:08:10.626  real	0m3.347s
00:08:10.626  user	0m7.272s
00:08:10.626  sys	0m0.684s
00:08:10.626   07:52:26 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:10.626   07:52:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:10.626   07:52:26  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:08:10.626   07:52:26  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:10.626   07:52:26  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:10.626   07:52:26  -- common/autotest_common.sh@10 -- # set +x
00:08:10.626  ************************************
00:08:10.626  START TEST nvme_rpc_timeouts
00:08:10.626  ************************************
00:08:10.626   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:08:10.626  * Looking for test storage...
00:08:10.626  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:10.626    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:10.626     07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version
00:08:10.626     07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:10.626    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:10.626     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:08:10.626     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:08:10.626     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.626     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:08:10.626    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:08:10.884     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:08:10.884     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:08:10.884     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.884     07:52:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:08:10.884    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:08:10.884    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:10.884    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:10.884    07:52:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:08:10.884    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:10.884    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:10.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.884  		--rc genhtml_branch_coverage=1
00:08:10.884  		--rc genhtml_function_coverage=1
00:08:10.884  		--rc genhtml_legend=1
00:08:10.884  		--rc geninfo_all_blocks=1
00:08:10.884  		--rc geninfo_unexecuted_blocks=1
00:08:10.884  		
00:08:10.884  		'
00:08:10.884    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:10.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.884  		--rc genhtml_branch_coverage=1
00:08:10.884  		--rc genhtml_function_coverage=1
00:08:10.884  		--rc genhtml_legend=1
00:08:10.884  		--rc geninfo_all_blocks=1
00:08:10.884  		--rc geninfo_unexecuted_blocks=1
00:08:10.884  		
00:08:10.884  		'
00:08:10.884    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:10.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.884  		--rc genhtml_branch_coverage=1
00:08:10.884  		--rc genhtml_function_coverage=1
00:08:10.884  		--rc genhtml_legend=1
00:08:10.884  		--rc geninfo_all_blocks=1
00:08:10.884  		--rc geninfo_unexecuted_blocks=1
00:08:10.884  		
00:08:10.884  		'
00:08:10.884    07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:10.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.884  		--rc genhtml_branch_coverage=1
00:08:10.884  		--rc genhtml_function_coverage=1
00:08:10.884  		--rc genhtml_legend=1
00:08:10.884  		--rc geninfo_all_blocks=1
00:08:10.884  		--rc geninfo_unexecuted_blocks=1
00:08:10.884  		
00:08:10.884  		'
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_63412
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_63412
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=63444
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:08:10.884   07:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 63444
00:08:10.884   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 63444 ']'
00:08:10.884   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:10.884   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:10.884  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:10.884   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:10.884   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:10.884   07:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:08:10.884  [2024-11-20 07:52:26.735722] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:08:10.884  [2024-11-20 07:52:26.736765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ]
00:08:10.884  [2024-11-20 07:52:26.871734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:11.141  [2024-11-20 07:52:26.959763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:11.141  [2024-11-20 07:52:26.959782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.400   07:52:27 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:11.400   07:52:27 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:08:11.400  Checking default timeout settings:
00:08:11.400   07:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:08:11.400   07:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:08:11.967  Making settings changes with rpc:
00:08:11.967   07:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:08:11.967   07:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:08:12.225  Check default vs. modified settings:
00:08:12.225   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:08:12.225   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_63412
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_63412
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:08:12.483  Setting action_on_timeout is changed as expected.
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_63412
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_63412
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:08:12.483  Setting timeout_us is changed as expected.
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_63412
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_63412
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:08:12.483  Setting timeout_admin_us is changed as expected.
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
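Each of the three checks above saves the target's configuration before and after bdev_nvme_set_options, then compares action_on_timeout, timeout_us and timeout_admin_us between the two snapshots with grep/awk/sed. The same values can be pulled out of a saved config with jq; the subsystems[]/config[]/params layout below is the usual save_config JSON shape, so treat the filter as a sketch rather than a guaranteed path:

    # Sketch: extract the bdev_nvme timeout settings from `rpc.py save_config` output.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config \
      | jq '.subsystems[].config[]? | select(.method == "bdev_nvme_set_options")
            | {action_on_timeout: .params.action_on_timeout,
               timeout_us: .params.timeout_us,
               timeout_admin_us: .params.timeout_admin_us}'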
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_63412 /tmp/settings_modified_63412
00:08:12.483   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 63444
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 63444 ']'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 63444
00:08:12.483    07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:12.483    07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63444
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:12.483  killing process with pid 63444
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63444'
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 63444
00:08:12.483   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 63444
00:08:13.050  RPC TIMEOUT SETTING TEST PASSED.
00:08:13.050   07:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:08:13.050  
00:08:13.050  real	0m2.442s
00:08:13.050  user	0m4.965s
00:08:13.050  sys	0m0.594s
00:08:13.050   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:13.050   07:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:08:13.050  ************************************
00:08:13.050  END TEST nvme_rpc_timeouts
00:08:13.050  ************************************
00:08:13.050    07:52:28  -- spdk/autotest.sh@239 -- # uname -s
00:08:13.050   07:52:28  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:08:13.050   07:52:28  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:08:13.050   07:52:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:13.050   07:52:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:13.050   07:52:28  -- common/autotest_common.sh@10 -- # set +x
00:08:13.050  ************************************
00:08:13.050  START TEST sw_hotplug
00:08:13.050  ************************************
00:08:13.050   07:52:28 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:08:13.050  * Looking for test storage...
00:08:13.050  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:13.050    07:52:29 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:13.050     07:52:29 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version
00:08:13.050     07:52:29 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:13.311    07:52:29 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:13.311     07:52:29 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:13.311    07:52:29 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:08:13.311    07:52:29 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:13.311    07:52:29 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:13.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.311  		--rc genhtml_branch_coverage=1
00:08:13.311  		--rc genhtml_function_coverage=1
00:08:13.311  		--rc genhtml_legend=1
00:08:13.311  		--rc geninfo_all_blocks=1
00:08:13.311  		--rc geninfo_unexecuted_blocks=1
00:08:13.311  		
00:08:13.311  		'
00:08:13.311    07:52:29 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:13.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.311  		--rc genhtml_branch_coverage=1
00:08:13.311  		--rc genhtml_function_coverage=1
00:08:13.311  		--rc genhtml_legend=1
00:08:13.311  		--rc geninfo_all_blocks=1
00:08:13.311  		--rc geninfo_unexecuted_blocks=1
00:08:13.311  		
00:08:13.311  		'
00:08:13.311    07:52:29 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:13.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.311  		--rc genhtml_branch_coverage=1
00:08:13.311  		--rc genhtml_function_coverage=1
00:08:13.311  		--rc genhtml_legend=1
00:08:13.311  		--rc geninfo_all_blocks=1
00:08:13.311  		--rc geninfo_unexecuted_blocks=1
00:08:13.311  		
00:08:13.311  		'
00:08:13.311    07:52:29 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:13.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.311  		--rc genhtml_branch_coverage=1
00:08:13.311  		--rc genhtml_function_coverage=1
00:08:13.311  		--rc genhtml_legend=1
00:08:13.311  		--rc geninfo_all_blocks=1
00:08:13.311  		--rc geninfo_unexecuted_blocks=1
00:08:13.311  		
00:08:13.311  		'
00:08:13.311   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:13.570  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:13.570  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:08:13.571  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:08:13.571   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:08:13.571   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:08:13.571   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:08:13.571    07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@233 -- # local class
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:08:13.571       07:52:29 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:08:13.571       07:52:29 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:08:13.571       07:52:29 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:08:13.571      07:52:29 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@18 -- # local i
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@18 -- # local i
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:08:13.571     07:52:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@328 -- # (( 2 ))
00:08:13.571    07:52:29 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:08:13.571   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2
00:08:13.571   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
00:08:13.571   07:52:29 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:13.830  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:13.830  Waiting for block devices as requested
00:08:13.830  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:14.089  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:14.089   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0'
00:08:14.089   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:14.661  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:08:14.661  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:14.661  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:14.661  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:08:14.919   07:52:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=63995
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning
00:08:14.919   07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:08:14.919    07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:08:14.919    07:52:30 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:08:14.919    07:52:30 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:08:14.919    07:52:30 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:08:14.920    07:52:30 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:08:14.920     07:52:30 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:08:14.920     07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:08:14.920     07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:08:14.920     07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:08:14.920     07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:08:14.920     07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:08:14.920  Initializing NVMe Controllers
00:08:14.920  Attaching to 0000:00:10.0
00:08:14.920  Attaching to 0000:00:11.0
00:08:15.178  Attached to 0000:00:10.0
00:08:15.178  Attached to 0000:00:11.0
00:08:15.178  Initialization complete. Starting I/O...
00:08:15.178  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:08:15.178  QEMU NVMe Ctrl       (12341               ):          0 I/Os completed (+0)
00:08:15.178  
00:08:16.114  QEMU NVMe Ctrl       (12340               ):       2744 I/Os completed (+2744)
00:08:16.114  QEMU NVMe Ctrl       (12341               ):       2749 I/Os completed (+2749)
00:08:16.114  
00:08:17.048  QEMU NVMe Ctrl       (12340               ):       5792 I/Os completed (+3048)
00:08:17.048  QEMU NVMe Ctrl       (12341               ):       5800 I/Os completed (+3051)
00:08:17.048  
00:08:17.987  QEMU NVMe Ctrl       (12340               ):       8732 I/Os completed (+2940)
00:08:17.987  QEMU NVMe Ctrl       (12341               ):       8777 I/Os completed (+2977)
00:08:17.987  
00:08:19.362  QEMU NVMe Ctrl       (12340               ):      12021 I/Os completed (+3289)
00:08:19.362  QEMU NVMe Ctrl       (12341               ):      12066 I/Os completed (+3289)
00:08:19.362  
00:08:19.929  QEMU NVMe Ctrl       (12340               ):      14694 I/Os completed (+2673)
00:08:19.929  QEMU NVMe Ctrl       (12341               ):      14746 I/Os completed (+2680)
00:08:19.929  
00:08:20.864     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:08:20.864     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:08:20.864     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:08:20.864  [2024-11-20 07:52:36.810354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:08:20.864  Controller removed: QEMU NVMe Ctrl       (12340               )
00:08:20.864  [2024-11-20 07:52:36.812184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  [2024-11-20 07:52:36.812244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  [2024-11-20 07:52:36.812259] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  [2024-11-20 07:52:36.812270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:08:20.864  [2024-11-20 07:52:36.813548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  [2024-11-20 07:52:36.813570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  [2024-11-20 07:52:36.813581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  [2024-11-20 07:52:36.813591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.864  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor
00:08:20.864  EAL: Scan for (pci) bus failed.
00:08:20.864     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:08:20.864     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:08:20.865  [2024-11-20 07:52:36.844160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:08:20.865  Controller removed: QEMU NVMe Ctrl       (12341               )
00:08:20.865  [2024-11-20 07:52:36.845277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  [2024-11-20 07:52:36.845317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  [2024-11-20 07:52:36.845331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  [2024-11-20 07:52:36.845342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:08:20.865  [2024-11-20 07:52:36.846430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  [2024-11-20 07:52:36.846458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  [2024-11-20 07:52:36.846470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  [2024-11-20 07:52:36.846480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:20.865  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:08:20.865  EAL: Scan for (pci) bus failed.
00:08:20.865     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:08:20.865     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:08:21.123  
00:08:21.123     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:08:21.123     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:08:21.123     07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:08:21.123     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:08:21.123     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:08:21.123     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:08:21.123     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:08:21.123     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:08:21.123  Attaching to 0000:00:10.0
00:08:21.123  Attached to 0000:00:10.0
00:08:21.381     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:08:21.381     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:08:21.381     07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:08:21.381  Attaching to 0000:00:11.0
00:08:21.381  Attached to 0000:00:11.0
00:08:21.949  QEMU NVMe Ctrl       (12340               ):       2593 I/Os completed (+2593)
00:08:21.949  QEMU NVMe Ctrl       (12341               ):       2112 I/Os completed (+2112)
00:08:21.949  
00:08:23.325  QEMU NVMe Ctrl       (12340               ):       4803 I/Os completed (+2210)
00:08:23.325  QEMU NVMe Ctrl       (12341               ):       4334 I/Os completed (+2222)
00:08:23.325  
00:08:24.259  QEMU NVMe Ctrl       (12340               ):       7130 I/Os completed (+2327)
00:08:24.259  QEMU NVMe Ctrl       (12341               ):       6663 I/Os completed (+2329)
00:08:24.259  
00:08:25.194  QEMU NVMe Ctrl       (12340               ):       9713 I/Os completed (+2583)
00:08:25.194  QEMU NVMe Ctrl       (12341               ):       9253 I/Os completed (+2590)
00:08:25.194  
00:08:26.130  QEMU NVMe Ctrl       (12340               ):      12634 I/Os completed (+2921)
00:08:26.130  QEMU NVMe Ctrl       (12341               ):      12175 I/Os completed (+2922)
00:08:26.130  
00:08:27.082  QEMU NVMe Ctrl       (12340               ):      15171 I/Os completed (+2537)
00:08:27.082  QEMU NVMe Ctrl       (12341               ):      14718 I/Os completed (+2543)
00:08:27.082  
00:08:28.017  QEMU NVMe Ctrl       (12340               ):      17483 I/Os completed (+2312)
00:08:28.017  QEMU NVMe Ctrl       (12341               ):      17034 I/Os completed (+2316)
00:08:28.017  
00:08:28.953  QEMU NVMe Ctrl       (12340               ):      20188 I/Os completed (+2705)
00:08:28.953  QEMU NVMe Ctrl       (12341               ):      19745 I/Os completed (+2711)
00:08:28.953  
00:08:30.329  QEMU NVMe Ctrl       (12340               ):      22636 I/Os completed (+2448)
00:08:30.329  QEMU NVMe Ctrl       (12341               ):      22203 I/Os completed (+2458)
00:08:30.329  
00:08:31.264  QEMU NVMe Ctrl       (12340               ):      25157 I/Os completed (+2521)
00:08:31.264  QEMU NVMe Ctrl       (12341               ):      24726 I/Os completed (+2523)
00:08:31.264  
00:08:32.200  QEMU NVMe Ctrl       (12340               ):      27714 I/Os completed (+2557)
00:08:32.200  QEMU NVMe Ctrl       (12341               ):      27284 I/Os completed (+2558)
00:08:32.200  
00:08:33.135  QEMU NVMe Ctrl       (12340               ):      30819 I/Os completed (+3105)
00:08:33.135  QEMU NVMe Ctrl       (12341               ):      30405 I/Os completed (+3121)
00:08:33.135  
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:08:33.392  [2024-11-20 07:52:49.279052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:08:33.392  Controller removed: QEMU NVMe Ctrl       (12340               )
00:08:33.392  [2024-11-20 07:52:49.280348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.280411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.280428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.280438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:08:33.392  [2024-11-20 07:52:49.281966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.282023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.282042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.282052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:08:33.392  [2024-11-20 07:52:49.307638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:08:33.392  Controller removed: QEMU NVMe Ctrl       (12341               )
00:08:33.392  [2024-11-20 07:52:49.308711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.308755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.308775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.308790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:08:33.392  [2024-11-20 07:52:49.310123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.310160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.310178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  [2024-11-20 07:52:49.310193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:33.392  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:08:33.392  EAL: Scan for (pci) bus failed.
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:08:33.392     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:08:33.649     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:08:33.649  Attaching to 0000:00:10.0
00:08:33.649  Attached to 0000:00:10.0
00:08:33.907     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:08:33.907     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:08:33.907     07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:08:33.907  Attaching to 0000:00:11.0
00:08:33.907  Attached to 0000:00:11.0
00:08:34.164  QEMU NVMe Ctrl       (12340               ):       1065 I/Os completed (+1065)
00:08:34.164  QEMU NVMe Ctrl       (12341               ):        701 I/Os completed (+701)
00:08:34.164  
00:08:35.115  QEMU NVMe Ctrl       (12340               ):       3758 I/Os completed (+2693)
00:08:35.115  QEMU NVMe Ctrl       (12341               ):       3402 I/Os completed (+2701)
00:08:35.115  
00:08:36.050  QEMU NVMe Ctrl       (12340               ):       6706 I/Os completed (+2948)
00:08:36.050  QEMU NVMe Ctrl       (12341               ):       6355 I/Os completed (+2953)
00:08:36.050  
00:08:36.985  QEMU NVMe Ctrl       (12340               ):       9194 I/Os completed (+2488)
00:08:36.985  QEMU NVMe Ctrl       (12341               ):       8854 I/Os completed (+2499)
00:08:36.985  
00:08:38.358  QEMU NVMe Ctrl       (12340               ):      11640 I/Os completed (+2446)
00:08:38.358  QEMU NVMe Ctrl       (12341               ):      11301 I/Os completed (+2447)
00:08:38.358  
00:08:38.926  QEMU NVMe Ctrl       (12340               ):      14290 I/Os completed (+2650)
00:08:38.926  QEMU NVMe Ctrl       (12341               ):      13956 I/Os completed (+2655)
00:08:38.926  
00:08:40.342  QEMU NVMe Ctrl       (12340               ):      17494 I/Os completed (+3204)
00:08:40.342  QEMU NVMe Ctrl       (12341               ):      17162 I/Os completed (+3206)
00:08:40.342  
00:08:41.275  QEMU NVMe Ctrl       (12340               ):      20719 I/Os completed (+3225)
00:08:41.275  QEMU NVMe Ctrl       (12341               ):      20394 I/Os completed (+3232)
00:08:41.275  
00:08:42.209  QEMU NVMe Ctrl       (12340               ):      23817 I/Os completed (+3098)
00:08:42.209  QEMU NVMe Ctrl       (12341               ):      23527 I/Os completed (+3133)
00:08:42.209  
00:08:43.144  QEMU NVMe Ctrl       (12340               ):      26455 I/Os completed (+2638)
00:08:43.144  QEMU NVMe Ctrl       (12341               ):      26166 I/Os completed (+2639)
00:08:43.144  
00:08:44.075  QEMU NVMe Ctrl       (12340               ):      29563 I/Os completed (+3108)
00:08:44.075  QEMU NVMe Ctrl       (12341               ):      29279 I/Os completed (+3113)
00:08:44.075  
00:08:45.025  QEMU NVMe Ctrl       (12340               ):      32877 I/Os completed (+3314)
00:08:45.025  QEMU NVMe Ctrl       (12341               ):      32589 I/Os completed (+3310)
00:08:45.025  
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:08:45.961  [2024-11-20 07:53:01.736261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:08:45.961  Controller removed: QEMU NVMe Ctrl       (12340               )
00:08:45.961  [2024-11-20 07:53:01.738102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.738158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.738176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.738191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:08:45.961  [2024-11-20 07:53:01.740416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.740475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.740494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.740510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:08:45.961  [2024-11-20 07:53:01.768386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:08:45.961  Controller removed: QEMU NVMe Ctrl       (12341               )
00:08:45.961  [2024-11-20 07:53:01.770013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.770066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.770088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.770104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:08:45.961  [2024-11-20 07:53:01.771903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.771953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.771970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  [2024-11-20 07:53:01.771984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:08:45.961  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:08:45.961  EAL: Scan for (pci) bus failed.
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:08:45.961     07:53:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:08:45.961  
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:08:46.219  Attaching to 0000:00:10.0
00:08:46.219  Attached to 0000:00:10.0
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:08:46.219     07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:08:46.219  Attaching to 0000:00:11.0
00:08:46.219  Attached to 0000:00:11.0
00:08:46.219  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:08:46.219  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:08:46.219  [2024-11-20 07:53:02.235630] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:08:58.427     07:53:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:08:58.427     07:53:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:08:58.427    07:53:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.42
00:08:58.427    07:53:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.42
00:08:58.427    07:53:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:08:58.427   07:53:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.42
00:08:58.427   07:53:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.42 2
00:08:58.427  remove_attach_helper took 43.42s to complete (handling 2 nvme drive(s))
00:08:58.427   07:53:14 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 63995
00:09:04.987  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (63995) - No such process
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 63995
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=64537
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 64537
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 64537 ']'
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:04.987  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:04.987  [2024-11-20 07:53:20.300317] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:09:04.987  [2024-11-20 07:53:20.300447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64537 ]
00:09:04.987  [2024-11-20 07:53:20.438245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.987  [2024-11-20 07:53:20.508589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:04.987   07:53:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:09:04.987   07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:09:04.987    07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:09:04.987    07:53:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:09:04.987    07:53:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:09:04.987    07:53:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:09:04.987    07:53:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:09:04.987     07:53:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:09:04.987     07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:09:04.987     07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:09:04.987     07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:09:04.987     07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:09:04.987     07:53:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:09:11.654     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:11.654      07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:11.654      07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:11.654      07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:11.654       07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:11.654       07:53:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:11.654       07:53:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:11.654  [2024-11-20 07:53:26.924049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:11.654  [2024-11-20 07:53:26.925319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.654  [2024-11-20 07:53:26.925353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.654  [2024-11-20 07:53:26.925367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.654  [2024-11-20 07:53:26.925387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.654  [2024-11-20 07:53:26.925397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:26.925407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.655  [2024-11-20 07:53:26.925417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.655  [2024-11-20 07:53:26.925426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:26.925435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.655  [2024-11-20 07:53:26.925445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.655  [2024-11-20 07:53:26.925454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:26.925463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.655       07:53:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:11.655     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:09:11.655     07:53:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:09:11.655     07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:09:11.655     07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:11.655      07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:11.655      07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:11.655       07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:11.655      07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:11.655       07:53:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:11.655       07:53:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:11.655       07:53:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:11.655     07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:09:11.655     07:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:09:11.655  [2024-11-20 07:53:27.624037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:11.655  [2024-11-20 07:53:27.625432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.655  [2024-11-20 07:53:27.625465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:27.625480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.655  [2024-11-20 07:53:27.625500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.655  [2024-11-20 07:53:27.625509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:27.625519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.655  [2024-11-20 07:53:27.625530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.655  [2024-11-20 07:53:27.625539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:27.625549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:11.655  [2024-11-20 07:53:27.625559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:11.655  [2024-11-20 07:53:27.625567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:11.655  [2024-11-20 07:53:27.625577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:12.221      07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:12.221      07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:12.221      07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:12.221       07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:12.221       07:53:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:12.221       07:53:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:12.221       07:53:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:12.221     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:12.479     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:12.479     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:12.479     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:12.479     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:12.479     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:12.479     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:12.736     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:12.736     07:53:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:09:24.976      07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:09:24.976      07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:24.976      07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:24.976       07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:24.976       07:53:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.976       07:53:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:24.976       07:53:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:24.976     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:24.976  [2024-11-20 07:53:40.624071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:24.976  [2024-11-20 07:53:40.625464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:24.976  [2024-11-20 07:53:40.625502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:24.976  [2024-11-20 07:53:40.625517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:24.976  [2024-11-20 07:53:40.625537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:24.976  [2024-11-20 07:53:40.625547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:24.977  [2024-11-20 07:53:40.625556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:24.977  [2024-11-20 07:53:40.625567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:24.977  [2024-11-20 07:53:40.625576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:24.977  [2024-11-20 07:53:40.625585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:24.977  [2024-11-20 07:53:40.625595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:24.977  [2024-11-20 07:53:40.625603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:24.977  [2024-11-20 07:53:40.625612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:24.977     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:09:24.977     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:24.977      07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:24.977      07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:24.977      07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:24.977       07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:24.977       07:53:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.977       07:53:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:24.977       07:53:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.977     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:09:24.977     07:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:09:25.270  [2024-11-20 07:53:41.024065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:25.270  [2024-11-20 07:53:41.025417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:25.270  [2024-11-20 07:53:41.025470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:25.270  [2024-11-20 07:53:41.025485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:25.270  [2024-11-20 07:53:41.025504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:25.270  [2024-11-20 07:53:41.025514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:25.270  [2024-11-20 07:53:41.025524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:25.270  [2024-11-20 07:53:41.025534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:25.270  [2024-11-20 07:53:41.025543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:25.270  [2024-11-20 07:53:41.025559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:25.270  [2024-11-20 07:53:41.025569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:25.270  [2024-11-20 07:53:41.025577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:25.270  [2024-11-20 07:53:41.025587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:25.270     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:09:25.270     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:25.270      07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:25.270      07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:25.270      07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:25.270       07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:25.270       07:53:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.270       07:53:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:25.270       07:53:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.270     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:09:25.270     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:25.539     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:25.797     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:25.797     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:25.797     07:53:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:09:38.108     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:09:38.108     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:09:38.108      07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:09:38.108      07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:38.108      07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:38.108       07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:38.108       07:53:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.108       07:53:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:38.108       07:53:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.108     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:09:38.108     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:38.108     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:38.108     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:38.108  [2024-11-20 07:53:53.724132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:38.108  [2024-11-20 07:53:53.725818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.108  [2024-11-20 07:53:53.725857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.108  [2024-11-20 07:53:53.725873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.108  [2024-11-20 07:53:53.725892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.108  [2024-11-20 07:53:53.725902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.108  [2024-11-20 07:53:53.725912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.108  [2024-11-20 07:53:53.725923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.109  [2024-11-20 07:53:53.725931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.109  [2024-11-20 07:53:53.725940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.109  [2024-11-20 07:53:53.725950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.109  [2024-11-20 07:53:53.725959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.109  [2024-11-20 07:53:53.725968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.109     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:38.109     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:38.109     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:09:38.109     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:38.109      07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:38.109      07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:38.109      07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:38.109       07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:38.109       07:53:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.109       07:53:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:38.109       07:53:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.109     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:09:38.109     07:53:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:09:38.367  [2024-11-20 07:53:54.224114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:38.367  [2024-11-20 07:53:54.225423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.367  [2024-11-20 07:53:54.225460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.367  [2024-11-20 07:53:54.225475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.367  [2024-11-20 07:53:54.225496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.367  [2024-11-20 07:53:54.225505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.367  [2024-11-20 07:53:54.225516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.367  [2024-11-20 07:53:54.225527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.367  [2024-11-20 07:53:54.225535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.367  [2024-11-20 07:53:54.225545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.367  [2024-11-20 07:53:54.225555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:38.367  [2024-11-20 07:53:54.225563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:38.367  [2024-11-20 07:53:54.225572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:38.367     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:09:38.367     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:38.367      07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:38.367      07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:38.367      07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:38.367       07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:38.367       07:53:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.367       07:53:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:38.367       07:53:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.367     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:09:38.367     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:38.625     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:38.625     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:38.625     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:38.883     07:53:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:09:51.098     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:09:51.098     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:09:51.098      07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:09:51.098      07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:51.098      07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:51.098       07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:51.098       07:54:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.098       07:54:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:51.098       07:54:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.098     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:09:51.098     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:51.098    07:54:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.06
00:09:51.098    07:54:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.06
00:09:51.098    07:54:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:09:51.098   07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.06
00:09:51.099   07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.06 2
00:09:51.099  remove_attach_helper took 46.06s to complete (handling 2 nvme drive(s))
00:09:51.099   07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:09:51.099   07:54:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.099   07:54:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:51.099   07:54:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.099   07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:09:51.099   07:54:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.099   07:54:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:51.099   07:54:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.099   07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:09:51.099   07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:09:51.099    07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:09:51.099    07:54:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:09:51.099    07:54:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:09:51.099    07:54:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:09:51.099    07:54:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:09:51.099     07:54:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:09:51.099     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:09:51.099     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:09:51.099     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:09:51.099     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:09:51.099     07:54:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:09:57.656     07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:57.656      07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:57.656      07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:57.656       07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:57.656       07:54:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.656      07:54:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:57.656       07:54:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:57.656       07:54:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.656  [2024-11-20 07:54:13.012239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:57.656  [2024-11-20 07:54:13.013431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.013470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.013483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656  [2024-11-20 07:54:13.013500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.013509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.013518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656  [2024-11-20 07:54:13.013527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.013535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.013544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656  [2024-11-20 07:54:13.013553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.013562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.013571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:09:57.656     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:09:57.656  [2024-11-20 07:54:13.512252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:57.656  [2024-11-20 07:54:13.513421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.513463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.513476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656  [2024-11-20 07:54:13.513492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.513501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.513510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656  [2024-11-20 07:54:13.513520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.513528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.513536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656  [2024-11-20 07:54:13.513546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:57.656  [2024-11-20 07:54:13.513554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:57.656  [2024-11-20 07:54:13.513563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:57.656     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:09:57.656     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:09:57.656      07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:09:57.656      07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:09:57.656       07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:57.656       07:54:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:57.656       07:54:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:57.656      07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:09:57.656       07:54:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:57.656     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:09:57.656     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:57.914     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:57.914     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:57.914     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:57.914     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:57.914     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:57.914     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:57.915     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:57.915     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:58.174     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:58.174     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:58.174     07:54:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:10.367     07:54:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:10.367     07:54:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:10.367      07:54:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:10.367      07:54:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:10.367      07:54:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:10.367       07:54:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:10.367       07:54:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.367       07:54:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:10.367       07:54:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:10.367      07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:10.367      07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:10.367      07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:10.367       07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:10.367       07:54:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.367       07:54:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:10.367       07:54:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.367  [2024-11-20 07:54:26.112279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:10.367  [2024-11-20 07:54:26.113478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.367  [2024-11-20 07:54:26.113523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.367  [2024-11-20 07:54:26.113537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.367  [2024-11-20 07:54:26.113553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.367  [2024-11-20 07:54:26.113562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.367  [2024-11-20 07:54:26.113572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.367  [2024-11-20 07:54:26.113583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.367  [2024-11-20 07:54:26.113591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.367  [2024-11-20 07:54:26.113600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.367  [2024-11-20 07:54:26.113610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.367  [2024-11-20 07:54:26.113622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.367  [2024-11-20 07:54:26.113631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:10.367     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:10.625  [2024-11-20 07:54:26.512269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:10.625  [2024-11-20 07:54:26.513454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.625  [2024-11-20 07:54:26.513489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.625  [2024-11-20 07:54:26.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.625  [2024-11-20 07:54:26.513530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.625  [2024-11-20 07:54:26.513540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.625  [2024-11-20 07:54:26.513555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.625  [2024-11-20 07:54:26.513564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.625  [2024-11-20 07:54:26.513572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.625  [2024-11-20 07:54:26.513581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.625  [2024-11-20 07:54:26.513592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:10.625  [2024-11-20 07:54:26.513600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:10.625  [2024-11-20 07:54:26.513609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:10.625     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:10.625     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:10.625      07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:10.625      07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:10.625       07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:10.625      07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:10.625       07:54:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.625       07:54:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:10.625       07:54:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.883     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:10.883     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:10.883     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:10.883     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:10.883     07:54:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:11.140     07:54:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:23.374      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:23.374      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:23.374      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:23.374       07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:23.374       07:54:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.374       07:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:23.374       07:54:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:23.374     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:23.374      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:23.374      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:23.375       07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:23.375       07:54:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.375      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:23.375       07:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:23.375       07:54:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.375  [2024-11-20 07:54:39.312309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:23.375  [2024-11-20 07:54:39.313565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.375  [2024-11-20 07:54:39.313601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.375  [2024-11-20 07:54:39.313615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.375  [2024-11-20 07:54:39.313633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.375  [2024-11-20 07:54:39.313642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.375  [2024-11-20 07:54:39.313652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.375  [2024-11-20 07:54:39.313663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.375  [2024-11-20 07:54:39.313672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.375  [2024-11-20 07:54:39.313682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.375  [2024-11-20 07:54:39.313692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.375  [2024-11-20 07:54:39.313700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.375  [2024-11-20 07:54:39.313710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.375     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:23.376     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:23.942  [2024-11-20 07:54:39.712322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:23.942  [2024-11-20 07:54:39.713600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.942  [2024-11-20 07:54:39.713637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.942  [2024-11-20 07:54:39.713651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.942  [2024-11-20 07:54:39.713669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.942  [2024-11-20 07:54:39.713678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.942  [2024-11-20 07:54:39.713688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.942  [2024-11-20 07:54:39.713699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.942  [2024-11-20 07:54:39.713718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.942  [2024-11-20 07:54:39.713727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.942  [2024-11-20 07:54:39.713737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:23.942  [2024-11-20 07:54:39.713746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:23.942  [2024-11-20 07:54:39.713755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:23.942     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:23.942     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:23.942      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:23.942      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:23.942       07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:23.942       07:54:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.942      07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:23.942       07:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:23.942       07:54:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.942     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:23.942     07:54:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:24.201     07:54:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:36.424     07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:36.424     07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:36.424      07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:36.424      07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:36.424      07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:36.424       07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:36.424       07:54:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.424       07:54:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:36.424       07:54:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.424     07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:36.424     07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:36.424    07:54:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.37
00:10:36.424    07:54:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.37
00:10:36.424    07:54:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:10:36.424   07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.37
00:10:36.424   07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.37 2
00:10:36.424  remove_attach_helper took 45.37s to complete (handling 2 nvme drive(s))
00:10:36.424   07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:10:36.424   07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 64537
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 64537 ']'
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 64537
00:10:36.424    07:54:52 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:36.424    07:54:52 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64537
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:36.424  killing process with pid 64537
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64537'
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@973 -- # kill 64537
00:10:36.424   07:54:52 sw_hotplug -- common/autotest_common.sh@978 -- # wait 64537
00:10:36.991   07:54:52 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:37.249  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:37.249  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:37.249  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:10:37.249  
00:10:37.249  real	2m24.299s
00:10:37.249  user	1m44.693s
00:10:37.249  sys	0m24.016s
00:10:37.249   07:54:53 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:37.249  ************************************
00:10:37.249  END TEST sw_hotplug
00:10:37.249  ************************************
00:10:37.249   07:54:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
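Each hotplug iteration traced above follows the same shape: detach both NVMe controllers, poll the SPDK target until bdev_get_bdevs stops reporting them, rebind the devices, and wait for the hotplug monitor to re-attach them. Below is a minimal bash sketch of one iteration, reconstructed from the xtrace output rather than taken from sw_hotplug.sh itself; bdev_bdfs and the polling loop mirror the traced commands (simplified to a plain pipe instead of process substitution), while every sysfs path is an assumption, since the trace records only the echo arguments, not their redirect targets.

    # Hypothetical reconstruction of one remove/attach iteration (sysfs paths assumed).
    bdev_bdfs() {
        # PCI addresses of the NVMe bdevs the SPDK target currently exposes.
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    remove_attach_once() {
        local dev bdfs
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"        # assumed target of the per-device "echo 1"
        done
        # Poll until the target no longer reports any NVMe bdev.
        while bdfs=($(bdev_bdfs)); (( ${#bdfs[@]} > 0 )); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
        done
        echo 1 > /sys/bus/pci/rescan                           # assumed target of the standalone "echo 1"
        for dev in "${nvmes[@]}"; do
            echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # assumed path
            echo "$dev" > /sys/bus/pci/drivers_probe                              # assumed path
            echo '' > "/sys/bus/pci/devices/$dev/driver_override"
        done
        sleep 12                                               # same grace period as in the trace
        bdfs=($(bdev_bdfs))
        [[ "${bdfs[*]}" == "${nvmes[*]}" ]]                    # both BDFs should be visible again
    }

With nvmes=(0000:00:10.0 0000:00:11.0), three such iterations plus the initial sleep 6 account for the ~45 s helper_time values logged above.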
00:10:37.508   07:54:53  -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]]
00:10:37.508   07:54:53  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:10:37.508   07:54:53  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@260 -- # timing_exit lib
00:10:37.508   07:54:53  -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:37.508   07:54:53  -- common/autotest_common.sh@10 -- # set +x
00:10:37.508   07:54:53  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:10:37.508   07:54:53  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:10:37.508   07:54:53  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:10:37.508   07:54:53  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:10:37.508   07:54:53  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:10:37.508   07:54:53  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:10:37.508   07:54:53  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:10:37.508   07:54:53  -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:37.508   07:54:53  -- common/autotest_common.sh@10 -- # set +x
00:10:37.508   07:54:53  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:10:37.508   07:54:53  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:10:37.508   07:54:53  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:10:37.508   07:54:53  -- common/autotest_common.sh@10 -- # set +x
00:10:39.410  INFO: APP EXITING
00:10:39.410  INFO: killing all VMs
00:10:39.410  INFO: killing vhost app
00:10:39.410  INFO: EXIT DONE
00:10:39.668  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:39.668  Waiting for block devices as requested
00:10:39.668  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:39.668  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:40.604  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:40.604  Cleaning
00:10:40.604  Removing:    /var/run/dpdk/spdk0/config
00:10:40.604  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:10:40.604  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:10:40.605  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:10:40.605  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:10:40.605  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:10:40.605  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:10:40.605  Removing:    /dev/shm/spdk_tgt_trace.pid56584
00:10:40.605  Removing:    /var/run/dpdk/spdk0
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid56431
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid56584
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid56782
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid56869
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid56896
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57006
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57016
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57196
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57274
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57350
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57442
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57521
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57554
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57590
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57659
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid57746
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58202
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58241
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58290
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58293
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58370
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58374
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58439
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58448
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58493
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58509
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58549
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58560
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58695
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58726
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58808
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid58971
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59025
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59055
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59297
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59383
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59474
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59510
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59534
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59612
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid59973
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60003
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60295
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60384
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60470
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60506
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60530
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid60549
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid61864
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid61983
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid61986
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid62005
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid62048
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid62052
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid62071
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid63342
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid63444
00:10:40.605  Removing:    /var/run/dpdk/spdk_pid64537
00:10:40.605  Clean
00:10:40.605   07:54:56  -- common/autotest_common.sh@1453 -- # return 0
00:10:40.605   07:54:56  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:10:40.605   07:54:56  -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:40.605   07:54:56  -- common/autotest_common.sh@10 -- # set +x
00:10:40.863   07:54:56  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:10:40.863   07:54:56  -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:40.863   07:54:56  -- common/autotest_common.sh@10 -- # set +x
00:10:40.863   07:54:56  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:10:40.863   07:54:56  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:10:40.863   07:54:56  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:10:40.863   07:54:56  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:10:40.863    07:54:56  -- spdk/autotest.sh@398 -- # hostname
00:10:40.863   07:54:56  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:10:41.122  geninfo: WARNING: invalid characters removed from testname!
00:11:13.226   07:55:24  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:11:13.226   07:55:28  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:11:15.761   07:55:31  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:11:18.291   07:55:33  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:11:20.823   07:55:36  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:11:23.356   07:55:39  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:11:25.888   07:55:41  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
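The lcov invocations above implement a simple merge-then-filter flow: the baseline capture and the post-test capture are combined into cov_total.info, and third-party or helper-app paths are then stripped one pattern at a time. A condensed bash restatement is sketched below; $out stands for the /home/vagrant/spdk_repo/spdk/../output directory used in the trace, and the long --rc option list (plus the --ignore-errors flag on the /usr/* pass) is abbreviated.

    # Condensed sketch of the traced coverage post-processing.
    out=/home/vagrant/spdk_repo/output
    rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

    # Merge the baseline and post-test captures into a single tracefile.
    lcov "${rc[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Remove DPDK, system, and helper-app sources from the combined report.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${rc[@]}" -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done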
00:11:25.888   07:55:41  -- spdk/autorun.sh@1 -- $ timing_finish
00:11:25.888   07:55:41  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:11:25.888   07:55:41  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:11:25.888   07:55:41  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:11:25.888   07:55:41  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:11:25.888  + [[ -n 5404 ]]
00:11:25.888  + sudo kill 5404
00:11:25.899  [Pipeline] }
00:11:25.919  [Pipeline] // timeout
00:11:25.925  [Pipeline] }
00:11:25.941  [Pipeline] // stage
00:11:25.946  [Pipeline] }
00:11:25.962  [Pipeline] // catchError
00:11:25.973  [Pipeline] stage
00:11:25.976  [Pipeline] { (Stop VM)
00:11:25.991  [Pipeline] sh
00:11:26.271  + vagrant halt
00:11:30.459  ==> default: Halting domain...
00:11:37.033  [Pipeline] sh
00:11:37.312  + vagrant destroy -f
00:11:41.562  ==> default: Removing domain...
00:11:41.575  [Pipeline] sh
00:11:41.857  + mv output /var/jenkins/workspace/nvme-cmb-pmr-vg-autotest/output
00:11:41.866  [Pipeline] }
00:11:41.883  [Pipeline] // stage
00:11:41.888  [Pipeline] }
00:11:41.903  [Pipeline] // dir
00:11:41.908  [Pipeline] }
00:11:41.922  [Pipeline] // wrap
00:11:41.928  [Pipeline] }
00:11:41.941  [Pipeline] // catchError
00:11:41.951  [Pipeline] stage
00:11:41.953  [Pipeline] { (Epilogue)
00:11:41.966  [Pipeline] sh
00:11:42.247  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:11:48.819  [Pipeline] catchError
00:11:48.822  [Pipeline] {
00:11:48.835  [Pipeline] sh
00:11:49.116  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:11:49.116  Artifacts sizes are good
00:11:49.124  [Pipeline] }
00:11:49.140  [Pipeline] // catchError
00:11:49.151  [Pipeline] archiveArtifacts
00:11:49.158  Archiving artifacts
00:11:49.270  [Pipeline] cleanWs
00:11:49.282  [WS-CLEANUP] Deleting project workspace...
00:11:49.282  [WS-CLEANUP] Deferred wipeout is used...
00:11:49.288  [WS-CLEANUP] done
00:11:49.290  [Pipeline] }
00:11:49.306  [Pipeline] // stage
00:11:49.312  [Pipeline] }
00:11:49.326  [Pipeline] // node
00:11:49.331  [Pipeline] End of Pipeline
00:11:49.367  Finished: SUCCESS