00:00:00.000  Started by upstream project "autotest-per-patch" build number 132808
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.043  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.044  The recommended git tool is: git
00:00:00.044  using credential 00000000-0000-0000-0000-000000000002
00:00:00.046   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.069  Fetching changes from the remote Git repository
00:00:00.071   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.115  Using shallow fetch with depth 1
00:00:00.115  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.115   > git --version # timeout=10
00:00:00.161   > git --version # 'git version 2.39.2'
00:00:00.161  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.198  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.198   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.618   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.630   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.640  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.640   > git config core.sparsecheckout # timeout=10
00:00:04.652   > git read-tree -mu HEAD # timeout=10
00:00:04.667   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.694  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.694   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.840  [Pipeline] Start of Pipeline
00:00:04.855  [Pipeline] library
00:00:04.857  Loading library shm_lib@master
00:00:04.857  Library shm_lib@master is cached. Copying from home.
00:00:04.877  [Pipeline] node
00:00:04.890  Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:04.892  [Pipeline] {
00:00:04.901  [Pipeline] catchError
00:00:04.903  [Pipeline] {
00:00:04.917  [Pipeline] wrap
00:00:04.928  [Pipeline] {
00:00:04.936  [Pipeline] stage
00:00:04.938  [Pipeline] { (Prologue)
00:00:04.958  [Pipeline] echo
00:00:04.959  Node: VM-host-SM38
00:00:04.964  [Pipeline] cleanWs
00:00:04.973  [WS-CLEANUP] Deleting project workspace...
00:00:04.973  [WS-CLEANUP] Deferred wipeout is used...
00:00:04.980  [WS-CLEANUP] done
00:00:05.227  [Pipeline] setCustomBuildProperty
00:00:05.326  [Pipeline] httpRequest
00:00:05.978  [Pipeline] echo
00:00:05.980  Sorcerer 10.211.164.112 is alive
00:00:05.990  [Pipeline] retry
00:00:05.992  [Pipeline] {
00:00:06.006  [Pipeline] httpRequest
00:00:06.011  HttpMethod: GET
00:00:06.012  URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.012  Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.014  Response Code: HTTP/1.1 200 OK
00:00:06.014  Success: Status code 200 is in the accepted range: 200,404
00:00:06.015  Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.519  [Pipeline] }
00:00:06.533  [Pipeline] // retry
00:00:06.539  [Pipeline] sh
00:00:06.832  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.844  [Pipeline] httpRequest
00:00:07.203  [Pipeline] echo
00:00:07.204  Sorcerer 10.211.164.112 is alive
00:00:07.213  [Pipeline] retry
00:00:07.214  [Pipeline] {
00:00:07.225  [Pipeline] httpRequest
00:00:07.228  HttpMethod: GET
00:00:07.229  URL: http://10.211.164.112/packages/spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:00:07.229  Sending request to url: http://10.211.164.112/packages/spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:00:07.230  Response Code: HTTP/1.1 200 OK
00:00:07.231  Success: Status code 200 is in the accepted range: 200,404
00:00:07.231  Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:00:28.351  [Pipeline] }
00:00:28.369  [Pipeline] // retry
00:00:28.377  [Pipeline] sh
00:00:28.664  + tar --no-same-owner -xf spdk_9237e57ed842482801130dac37a326b57cf6f2cc.tar.gz
00:00:31.999  [Pipeline] sh
00:00:32.285  + git -C spdk log --oneline -n5
00:00:32.285  9237e57ed test/check_so_deps: use VERSION to look for prior tags
00:00:32.285  6584139bf build: use VERSION file for storing version
00:00:32.285  a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:00:32.285  a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:00:32.286  2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:00:32.303  [Pipeline] writeFile
00:00:32.314  [Pipeline] sh
00:00:32.599  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:32.611  [Pipeline] sh
00:00:32.896  + cat autorun-spdk.conf
00:00:32.896  SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.896  SPDK_TEST_NVME=1
00:00:32.896  SPDK_TEST_FTL=1
00:00:32.896  SPDK_TEST_ISAL=1
00:00:32.896  SPDK_RUN_ASAN=1
00:00:32.896  SPDK_RUN_UBSAN=1
00:00:32.896  SPDK_TEST_XNVME=1
00:00:32.896  SPDK_TEST_NVME_FDP=1
00:00:32.896  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:32.905  RUN_NIGHTLY=0
00:00:32.907  [Pipeline] }
00:00:32.921  [Pipeline] // stage
00:00:32.935  [Pipeline] stage
00:00:32.936  [Pipeline] { (Run VM)
00:00:32.949  [Pipeline] sh
00:00:33.236  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:33.237  + echo 'Start stage prepare_nvme.sh'
00:00:33.237  Start stage prepare_nvme.sh
00:00:33.237  + [[ -n 10 ]]
00:00:33.237  + disk_prefix=ex10
00:00:33.237  + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:33.237  + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:33.237  + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:33.237  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.237  ++ SPDK_TEST_NVME=1
00:00:33.237  ++ SPDK_TEST_FTL=1
00:00:33.237  ++ SPDK_TEST_ISAL=1
00:00:33.237  ++ SPDK_RUN_ASAN=1
00:00:33.237  ++ SPDK_RUN_UBSAN=1
00:00:33.237  ++ SPDK_TEST_XNVME=1
00:00:33.237  ++ SPDK_TEST_NVME_FDP=1
00:00:33.237  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:33.237  ++ RUN_NIGHTLY=0
00:00:33.237  + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:33.237  + nvme_files=()
00:00:33.237  + declare -A nvme_files
00:00:33.237  + backend_dir=/var/lib/libvirt/images/backends
00:00:33.237  + nvme_files['nvme.img']=5G
00:00:33.237  + nvme_files['nvme-cmb.img']=5G
00:00:33.237  + nvme_files['nvme-multi0.img']=4G
00:00:33.237  + nvme_files['nvme-multi1.img']=4G
00:00:33.237  + nvme_files['nvme-multi2.img']=4G
00:00:33.237  + nvme_files['nvme-openstack.img']=8G
00:00:33.237  + nvme_files['nvme-zns.img']=5G
00:00:33.237  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:00:33.237  + ((  SPDK_TEST_FTL == 1  ))
00:00:33.237  + nvme_files["nvme-ftl.img"]=6G
00:00:33.237  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:00:33.237  + nvme_files["nvme-fdp.img"]=1G
00:00:33.237  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:33.237  + for nvme in "${!nvme_files[@]}"
00:00:33.237  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:00:33.237  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.237  + for nvme in "${!nvme_files[@]}"
00:00:33.237  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:00:33.809  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:33.809  + for nvme in "${!nvme_files[@]}"
00:00:33.809  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:00:33.809  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.809  + for nvme in "${!nvme_files[@]}"
00:00:33.809  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:00:34.070  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:34.070  + for nvme in "${!nvme_files[@]}"
00:00:34.070  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:00:34.070  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:34.070  + for nvme in "${!nvme_files[@]}"
00:00:34.070  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:00:34.070  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:34.070  + for nvme in "${!nvme_files[@]}"
00:00:34.070  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:00:34.070  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:34.070  + for nvme in "${!nvme_files[@]}"
00:00:34.070  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:00:34.331  Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:34.331  + for nvme in "${!nvme_files[@]}"
00:00:34.331  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:00:34.331  Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:34.331  ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:00:34.331  + echo 'End stage prepare_nvme.sh'
00:00:34.331  End stage prepare_nvme.sh
00:00:34.344  [Pipeline] sh
00:00:34.629  + DISTRO=fedora39
00:00:34.629  + CPUS=10
00:00:34.629  + RAM=12288
00:00:34.629  + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:34.629  Setup: -n 10 -s 12288 -x  -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:34.629  
00:00:34.629  DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:34.629  SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:34.629  VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:34.629  HELP=0
00:00:34.629  DRY_RUN=0
00:00:34.629  NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:00:34.629  NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:34.629  NVME_AUTO_CREATE=0
00:00:34.629  NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:00:34.629  NVME_CMB=,,,,
00:00:34.629  NVME_PMR=,,,,
00:00:34.629  NVME_ZNS=,,,,
00:00:34.629  NVME_MS=true,,,,
00:00:34.629  NVME_FDP=,,,on,
00:00:34.629  SPDK_VAGRANT_DISTRO=fedora39
00:00:34.629  SPDK_VAGRANT_VMCPU=10
00:00:34.629  SPDK_VAGRANT_VMRAM=12288
00:00:34.629  SPDK_VAGRANT_PROVIDER=libvirt
00:00:34.629  SPDK_VAGRANT_HTTP_PROXY=
00:00:34.629  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:34.629  SPDK_OPENSTACK_NETWORK=0
00:00:34.629  VAGRANT_PACKAGE_BOX=0
00:00:34.629  VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:34.629  FORCE_DISTRO=true
00:00:34.629  VAGRANT_BOX_VERSION=
00:00:34.629  EXTRA_VAGRANTFILES=
00:00:34.629  NIC_MODEL=e1000
00:00:34.629  
00:00:34.629  mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:34.629  /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:37.176  Bringing machine 'default' up with 'libvirt' provider...
00:00:37.746  ==> default: Creating image (snapshot of base box volume).
00:00:38.006  ==> default: Creating domain with the following settings...
00:00:38.006  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1733763000_d03207fe02792552e48a
00:00:38.006  ==> default:  -- Domain type:       kvm
00:00:38.006  ==> default:  -- Cpus:              10
00:00:38.006  ==> default:  -- Feature:           acpi
00:00:38.006  ==> default:  -- Feature:           apic
00:00:38.006  ==> default:  -- Feature:           pae
00:00:38.006  ==> default:  -- Memory:            12288M
00:00:38.006  ==> default:  -- Memory Backing:    hugepages: 
00:00:38.006  ==> default:  -- Management MAC:    
00:00:38.006  ==> default:  -- Loader:            
00:00:38.006  ==> default:  -- Nvram:             
00:00:38.006  ==> default:  -- Base box:          spdk/fedora39
00:00:38.006  ==> default:  -- Storage pool:      default
00:00:38.007  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733763000_d03207fe02792552e48a.img (20G)
00:00:38.007  ==> default:  -- Volume Cache:      default
00:00:38.007  ==> default:  -- Kernel:            
00:00:38.007  ==> default:  -- Initrd:            
00:00:38.007  ==> default:  -- Graphics Type:     vnc
00:00:38.007  ==> default:  -- Graphics Port:     -1
00:00:38.007  ==> default:  -- Graphics IP:       127.0.0.1
00:00:38.007  ==> default:  -- Graphics Password: Not defined
00:00:38.007  ==> default:  -- Video Type:        cirrus
00:00:38.007  ==> default:  -- Video VRAM:        9216
00:00:38.007  ==> default:  -- Sound Type:	
00:00:38.007  ==> default:  -- Keymap:            en-us
00:00:38.007  ==> default:  -- TPM Path:          
00:00:38.007  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:00:38.007  ==> default:  -- Command line args: 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:00:38.007  ==> default:     -> value=-drive, 
00:00:38.007  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:00:38.007  ==> default:     -> value=-drive, 
00:00:38.007  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 
00:00:38.007  ==> default:     -> value=-drive, 
00:00:38.007  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:00:38.007  ==> default:     -> value=-drive, 
00:00:38.007  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:00:38.007  ==> default:     -> value=-drive, 
00:00:38.007  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 
00:00:38.007  ==> default:     -> value=-drive, 
00:00:38.007  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 
00:00:38.007  ==> default:     -> value=-device, 
00:00:38.007  ==> default:     -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:00:38.267  ==> default: Creating shared folders metadata...
00:00:38.267  ==> default: Starting domain.
00:00:41.568  ==> default: Waiting for domain to get an IP address...
00:00:59.757  ==> default: Waiting for SSH to become available...
00:00:59.757  ==> default: Configuring and enabling network interfaces...
00:01:03.969      default: SSH address: 192.168.121.190:22
00:01:03.969      default: SSH username: vagrant
00:01:03.969      default: SSH auth method: private key
00:01:05.888  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:15.892  ==> default: Mounting SSHFS shared folder...
00:01:16.834  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:16.834  ==> default: Checking Mount..
00:01:18.219  ==> default: Folder Successfully Mounted!
00:01:18.219  
00:01:18.219    SUCCESS!
00:01:18.219  
00:01:18.219    cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:18.219    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:18.219    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:18.219  
00:01:18.228  [Pipeline] }
00:01:18.243  [Pipeline] // stage
00:01:18.252  [Pipeline] dir
00:01:18.253  Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:18.255  [Pipeline] {
00:01:18.267  [Pipeline] catchError
00:01:18.269  [Pipeline] {
00:01:18.281  [Pipeline] sh
00:01:18.565  + vagrant ssh-config --host vagrant
00:01:18.565  + tee ssh_conf
00:01:18.565  + sed -ne '/^Host/,$p'
00:01:21.865  Host vagrant
00:01:21.865    HostName 192.168.121.190
00:01:21.865    User vagrant
00:01:21.865    Port 22
00:01:21.865    UserKnownHostsFile /dev/null
00:01:21.865    StrictHostKeyChecking no
00:01:21.865    PasswordAuthentication no
00:01:21.865    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:21.865    IdentitiesOnly yes
00:01:21.865    LogLevel FATAL
00:01:21.865    ForwardAgent yes
00:01:21.865    ForwardX11 yes
00:01:21.865  
00:01:21.879  [Pipeline] withEnv
00:01:21.881  [Pipeline] {
00:01:21.894  [Pipeline] sh
00:01:22.173  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:22.173  		source /etc/os-release
00:01:22.173  		[[ -e /image.version ]] && img=$(< /image.version)
00:01:22.173  		# Minimal, systemd-like check.
00:01:22.173  		if [[ -e /.dockerenv ]]; then
00:01:22.173  			# Clear garbage from the node'\''s name:
00:01:22.173  			#  agt-er_autotest_547-896 -> autotest_547-896
00:01:22.173  			#  $HOSTNAME is the actual container id
00:01:22.173  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:22.174  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:22.174  				# We can assume this is a mount from a host where container is running,
00:01:22.174  				# so fetch its hostname to easily identify the target swarm worker.
00:01:22.174  				container="$(< /etc/hostname) ($agent)"
00:01:22.174  			else
00:01:22.174  				# Fallback
00:01:22.174  				container=$agent
00:01:22.174  			fi
00:01:22.174  		fi
00:01:22.174  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:22.174  '
00:01:22.443  [Pipeline] }
00:01:22.454  [Pipeline] // withEnv
00:01:22.461  [Pipeline] setCustomBuildProperty
00:01:22.472  [Pipeline] stage
00:01:22.474  [Pipeline] { (Tests)
00:01:22.487  [Pipeline] sh
00:01:22.767  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:23.042  [Pipeline] sh
00:01:23.328  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:23.602  [Pipeline] timeout
00:01:23.603  Timeout set to expire in 50 min
00:01:23.604  [Pipeline] {
00:01:23.617  [Pipeline] sh
00:01:23.902  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:24.531  HEAD is now at 9237e57ed test/check_so_deps: use VERSION to look for prior tags
00:01:24.546  [Pipeline] sh
00:01:24.830  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:25.108  [Pipeline] sh
00:01:25.394  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:25.672  [Pipeline] sh
00:01:25.959  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:26.220  ++ readlink -f spdk_repo
00:01:26.220  + DIR_ROOT=/home/vagrant/spdk_repo
00:01:26.220  + [[ -n /home/vagrant/spdk_repo ]]
00:01:26.220  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:26.220  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:26.220  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:26.220  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:26.220  + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:26.220  + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:26.220  + cd /home/vagrant/spdk_repo
00:01:26.220  + source /etc/os-release
00:01:26.220  ++ NAME='Fedora Linux'
00:01:26.220  ++ VERSION='39 (Cloud Edition)'
00:01:26.220  ++ ID=fedora
00:01:26.220  ++ VERSION_ID=39
00:01:26.220  ++ VERSION_CODENAME=
00:01:26.220  ++ PLATFORM_ID=platform:f39
00:01:26.220  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:26.220  ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.220  ++ LOGO=fedora-logo-icon
00:01:26.220  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:26.220  ++ HOME_URL=https://fedoraproject.org/
00:01:26.220  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:26.220  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.220  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.220  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.220  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:26.220  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.220  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:26.220  ++ SUPPORT_END=2024-11-12
00:01:26.220  ++ VARIANT='Cloud Edition'
00:01:26.220  ++ VARIANT_ID=cloud
00:01:26.220  + uname -a
00:01:26.220  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:26.220  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:26.480  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:26.740  Hugepages
00:01:26.740  node     hugesize     free /  total
00:01:26.998  node0   1048576kB        0 /      0
00:01:26.998  node0      2048kB        0 /      0
00:01:26.998  
00:01:26.998  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:26.998  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:26.998  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:26.998  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:01:26.998  NVMe                      0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:01:26.998  NVMe                      0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:01:26.998  + rm -f /tmp/spdk-ld-path
00:01:26.998  + source autorun-spdk.conf
00:01:26.998  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.998  ++ SPDK_TEST_NVME=1
00:01:26.998  ++ SPDK_TEST_FTL=1
00:01:26.998  ++ SPDK_TEST_ISAL=1
00:01:26.998  ++ SPDK_RUN_ASAN=1
00:01:26.998  ++ SPDK_RUN_UBSAN=1
00:01:26.998  ++ SPDK_TEST_XNVME=1
00:01:26.998  ++ SPDK_TEST_NVME_FDP=1
00:01:26.998  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.998  ++ RUN_NIGHTLY=0
00:01:26.998  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:26.998  + [[ -n '' ]]
00:01:26.998  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:26.998  + for M in /var/spdk/build-*-manifest.txt
00:01:26.998  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:26.998  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.998  + for M in /var/spdk/build-*-manifest.txt
00:01:26.998  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:26.998  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.998  + for M in /var/spdk/build-*-manifest.txt
00:01:26.998  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:26.998  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.998  ++ uname
00:01:26.998  + [[ Linux == \L\i\n\u\x ]]
00:01:26.998  + sudo dmesg -T
00:01:26.998  + sudo dmesg --clear
00:01:26.998  + dmesg_pid=5028
00:01:26.998  + [[ Fedora Linux == FreeBSD ]]
00:01:26.998  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.998  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.998  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:26.998  + [[ -x /usr/src/fio-static/fio ]]
00:01:26.998  + sudo dmesg -Tw
00:01:26.998  + export FIO_BIN=/usr/src/fio-static/fio
00:01:26.998  + FIO_BIN=/usr/src/fio-static/fio
00:01:26.998  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:26.998  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:26.998  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:26.998  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.998  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.998  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:26.998  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.998  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.998  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.998    16:50:50  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:26.998   16:50:50  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.998    16:50:50  -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:27.256   16:50:50  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:27.256   16:50:50  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:27.256     16:50:50  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:27.256    16:50:50  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:27.256     16:50:50  -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:27.256     16:50:50  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:27.256     16:50:50  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:27.256     16:50:50  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:27.256      16:50:50  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.256      16:50:50  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.256      16:50:50  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.256      16:50:50  -- paths/export.sh@5 -- $ export PATH
00:01:27.256      16:50:50  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.256    16:50:50  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:27.256      16:50:50  -- common/autobuild_common.sh@493 -- $ date +%s
00:01:27.256     16:50:50  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733763050.XXXXXX
00:01:27.256    16:50:50  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733763050.qYgd1p
00:01:27.256    16:50:50  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:27.256    16:50:50  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:27.256    16:50:50  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:27.256    16:50:50  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:27.257    16:50:50  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.257     16:50:50  -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:27.257     16:50:50  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:27.257     16:50:50  -- common/autotest_common.sh@10 -- $ set +x
00:01:27.257    16:50:50  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:27.257    16:50:50  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:27.257    16:50:50  -- pm/common@17 -- $ local monitor
00:01:27.257    16:50:50  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.257    16:50:50  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.257    16:50:50  -- pm/common@25 -- $ sleep 1
00:01:27.257     16:50:50  -- pm/common@21 -- $ date +%s
00:01:27.257     16:50:50  -- pm/common@21 -- $ date +%s
00:01:27.257    16:50:50  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733763050
00:01:27.257    16:50:50  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733763050
00:01:27.257  Traceback (most recent call last):
00:01:27.257    File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:01:27.257      import spdk.rpc as rpc  # noqa
00:01:27.257      ^^^^^^^^^^^^^^^^^^^^^^
00:01:27.257    File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in <module>
00:01:27.257      from .version import __version__
00:01:27.257  ModuleNotFoundError: No module named 'spdk.version'
00:01:27.257  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733763050_collect-cpu-load.pm.log
00:01:27.257  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733763050_collect-vmstat.pm.log
00:01:27.257  Traceback (most recent call last):
00:01:27.257    File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:01:27.257      import spdk.rpc as rpc  # noqa
00:01:27.257      ^^^^^^^^^^^^^^^^^^^^^^
00:01:27.257    File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in <module>
00:01:27.257      from .version import __version__
00:01:27.257  ModuleNotFoundError: No module named 'spdk.version'
00:01:28.191    16:50:51  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:28.191   16:50:51  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.191   16:50:51  -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.191   16:50:51  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:28.191   16:50:51  -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.191  Mon Dec  9 04:50:51 PM UTC 2024
00:01:28.191   16:50:51  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.191  v25.01-pre-305-g9237e57ed
00:01:28.191   16:50:51  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:28.191   16:50:51  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:28.191   16:50:51  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.191   16:50:51  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.191   16:50:51  -- common/autotest_common.sh@10 -- $ set +x
00:01:28.191  ************************************
00:01:28.191  START TEST asan
00:01:28.191  ************************************
00:01:28.191  using asan
00:01:28.191   16:50:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:28.191  
00:01:28.191  real	0m0.000s
00:01:28.191  user	0m0.000s
00:01:28.191  sys	0m0.000s
00:01:28.191   16:50:51 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:28.191   16:50:51 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.191  ************************************
00:01:28.191  END TEST asan
00:01:28.191  ************************************
00:01:28.191   16:50:51  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.191   16:50:51  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.191   16:50:51  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.191   16:50:51  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.191   16:50:51  -- common/autotest_common.sh@10 -- $ set +x
00:01:28.191  ************************************
00:01:28.191  START TEST ubsan
00:01:28.191  ************************************
00:01:28.191  using ubsan
00:01:28.191   16:50:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:28.191  
00:01:28.191  real	0m0.000s
00:01:28.191  user	0m0.000s
00:01:28.191  sys	0m0.000s
00:01:28.191   16:50:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:28.191  ************************************
00:01:28.191  END TEST ubsan
00:01:28.191  ************************************
00:01:28.191   16:50:51 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.449   16:50:51  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:28.449   16:50:51  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:28.449   16:50:51  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:28.449   16:50:51  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:28.449   16:50:51  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:28.449   16:50:51  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:28.449   16:50:51  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:28.449   16:50:51  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:28.449   16:50:51  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:28.449  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:28.449  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:28.707  Using 'verbs' RDMA provider
00:01:39.740  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:49.723  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:49.982  Creating mk/config.mk...done.
00:01:49.982  Creating mk/cc.flags.mk...done.
00:01:49.982  Type 'make' to build.
00:01:49.982   16:51:12  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:49.982   16:51:13  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.982   16:51:13  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.982   16:51:13  -- common/autotest_common.sh@10 -- $ set +x
00:01:49.982  ************************************
00:01:49.982  START TEST make
00:01:49.982  ************************************
00:01:49.982   16:51:13 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:50.240  (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:01:50.240  	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:01:50.240  	meson setup builddir \
00:01:50.240  	-Dwith-libaio=enabled \
00:01:50.240  	-Dwith-liburing=enabled \
00:01:50.240  	-Dwith-libvfn=disabled \
00:01:50.240  	-Dwith-spdk=disabled \
00:01:50.240  	-Dexamples=false \
00:01:50.240  	-Dtests=false \
00:01:50.240  	-Dtools=false && \
00:01:50.240  	meson compile -C builddir && \
00:01:50.240  	cd -)
00:01:52.137  The Meson build system
00:01:52.137  Version: 1.5.0
00:01:52.137  Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:01:52.137  Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:52.137  Build type: native build
00:01:52.137  Project name: xnvme
00:01:52.137  Project version: 0.7.5
00:01:52.137  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:52.137  C linker for the host machine: cc ld.bfd 2.40-14
00:01:52.137  Host machine cpu family: x86_64
00:01:52.137  Host machine cpu: x86_64
00:01:52.137  Message: host_machine.system: linux
00:01:52.137  Compiler for C supports arguments -Wno-missing-braces: YES 
00:01:52.137  Compiler for C supports arguments -Wno-cast-function-type: YES 
00:01:52.137  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:01:52.137  Run-time dependency threads found: YES
00:01:52.137  Has header "setupapi.h" : NO 
00:01:52.137  Has header "linux/blkzoned.h" : YES 
00:01:52.137  Has header "linux/blkzoned.h" : YES (cached)
00:01:52.137  Has header "libaio.h" : YES 
00:01:52.137  Library aio found: YES
00:01:52.137  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:52.137  Run-time dependency liburing found: YES 2.2
00:01:52.137  Dependency libvfn skipped: feature with-libvfn disabled
00:01:52.137  Found CMake: /usr/bin/cmake (3.27.7)
00:01:52.137  Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:01:52.137  Subproject spdk : skipped: feature with-spdk disabled
00:01:52.137  Run-time dependency appleframeworks found: NO (tried framework)
00:01:52.137  Run-time dependency appleframeworks found: NO (tried framework)
00:01:52.137  Library rt found: YES
00:01:52.137  Checking for function "clock_gettime" with dependency -lrt: YES 
00:01:52.137  Configuring xnvme_config.h using configuration
00:01:52.137  Configuring xnvme.spec using configuration
00:01:52.137  Run-time dependency bash-completion found: YES 2.11
00:01:52.137  Message: Bash-completions: /usr/share/bash-completion/completions
00:01:52.137  Program cp found: YES (/usr/bin/cp)
00:01:52.137  Build targets in project: 3
00:01:52.137  
00:01:52.137  xnvme 0.7.5
00:01:52.137  
00:01:52.137    Subprojects
00:01:52.137      spdk         : NO Feature 'with-spdk' disabled
00:01:52.137  
00:01:52.137    User defined options
00:01:52.137      examples     : false
00:01:52.137      tests        : false
00:01:52.137      tools        : false
00:01:52.137      with-libaio  : enabled
00:01:52.137      with-liburing: enabled
00:01:52.137      with-libvfn  : disabled
00:01:52.137      with-spdk    : disabled
00:01:52.137  
00:01:52.137  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:52.703  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:01:52.703  [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:01:52.703  [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:01:52.703  [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:01:52.703  [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:01:52.703  [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:01:52.703  [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:01:52.703  [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:01:52.703  [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:01:52.703  [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:01:52.703  [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:01:52.703  [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:01:52.703  [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:01:52.703  [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:01:52.703  [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:01:52.961  [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:01:52.961  [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:01:52.961  [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:01:52.961  [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:01:52.961  [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:01:52.961  [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:01:52.961  [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:01:52.961  [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:01:52.961  [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:01:52.961  [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:01:52.961  [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:01:52.961  [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:01:52.961  [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:01:52.961  [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:01:52.961  [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:01:52.961  [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:01:52.961  [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:01:52.961  [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:01:52.961  [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:01:52.961  [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:01:52.961  [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:01:52.961  [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:01:52.961  [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:01:52.961  [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:01:52.961  [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:01:52.961  [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:01:52.961  [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:01:52.961  [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:01:52.961  [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:01:52.961  [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:01:52.961  [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:01:52.961  [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:01:52.961  [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:01:52.961  [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:01:52.961  [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:01:52.961  [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:01:52.961  [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:01:53.219  [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:01:53.219  [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:01:53.219  [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:01:53.219  [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:01:53.219  [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:01:53.219  [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:01:53.219  [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:01:53.219  [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:01:53.219  [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:01:53.219  [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:01:53.219  [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:01:53.219  [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:01:53.219  [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:01:53.219  [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:01:53.219  [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:01:53.219  [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:01:53.219  [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:01:53.219  [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:01:53.477  [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:01:53.477  [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:01:53.477  [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:01:53.477  [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:01:53.735  [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:01:53.735  [75/76] Linking static target lib/libxnvme.a
00:01:53.735  [76/76] Linking target lib/libxnvme.so.0.7.5
00:01:53.735  INFO: autodetecting backend as ninja
00:01:53.735  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:53.994  /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:00.610  The Meson build system
00:02:00.610  Version: 1.5.0
00:02:00.610  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:00.610  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:00.610  Build type: native build
00:02:00.610  Program cat found: YES (/usr/bin/cat)
00:02:00.610  Project name: DPDK
00:02:00.610  Project version: 24.03.0
00:02:00.610  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:00.610  C linker for the host machine: cc ld.bfd 2.40-14
00:02:00.610  Host machine cpu family: x86_64
00:02:00.610  Host machine cpu: x86_64
00:02:00.610  Message: ## Building in Developer Mode ##
00:02:00.610  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:00.610  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:00.610  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:00.610  Program python3 found: YES (/usr/bin/python3)
00:02:00.610  Program cat found: YES (/usr/bin/cat)
00:02:00.610  Compiler for C supports arguments -march=native: YES 
00:02:00.610  Checking for size of "void *" : 8 
00:02:00.610  Checking for size of "void *" : 8 (cached)
00:02:00.610  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:02:00.610  Library m found: YES
00:02:00.610  Library numa found: YES
00:02:00.610  Has header "numaif.h" : YES 
00:02:00.610  Library fdt found: NO
00:02:00.610  Library execinfo found: NO
00:02:00.610  Has header "execinfo.h" : YES 
00:02:00.610  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:00.610  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:00.610  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:00.610  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:00.610  Run-time dependency openssl found: YES 3.1.1
00:02:00.610  Run-time dependency libpcap found: YES 1.10.4
00:02:00.610  Has header "pcap.h" with dependency libpcap: YES 
00:02:00.610  Compiler for C supports arguments -Wcast-qual: YES 
00:02:00.610  Compiler for C supports arguments -Wdeprecated: YES 
00:02:00.610  Compiler for C supports arguments -Wformat: YES 
00:02:00.610  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:02:00.610  Compiler for C supports arguments -Wformat-security: NO 
00:02:00.610  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:00.610  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:00.610  Compiler for C supports arguments -Wnested-externs: YES 
00:02:00.610  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:00.610  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:00.610  Compiler for C supports arguments -Wsign-compare: YES 
00:02:00.610  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:00.610  Compiler for C supports arguments -Wundef: YES 
00:02:00.610  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:00.610  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:00.610  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:00.610  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:00.610  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:00.610  Program objdump found: YES (/usr/bin/objdump)
00:02:00.610  Compiler for C supports arguments -mavx512f: YES 
00:02:00.610  Checking if "AVX512 checking" compiles: YES 
00:02:00.610  Fetching value of define "__SSE4_2__" : 1 
00:02:00.610  Fetching value of define "__AES__" : 1 
00:02:00.610  Fetching value of define "__AVX__" : 1 
00:02:00.610  Fetching value of define "__AVX2__" : 1 
00:02:00.610  Fetching value of define "__AVX512BW__" : 1 
00:02:00.610  Fetching value of define "__AVX512CD__" : 1 
00:02:00.610  Fetching value of define "__AVX512DQ__" : 1 
00:02:00.610  Fetching value of define "__AVX512F__" : 1 
00:02:00.610  Fetching value of define "__AVX512VL__" : 1 
00:02:00.610  Fetching value of define "__PCLMUL__" : 1 
00:02:00.610  Fetching value of define "__RDRND__" : 1 
00:02:00.610  Fetching value of define "__RDSEED__" : 1 
00:02:00.610  Fetching value of define "__VPCLMULQDQ__" : 1 
00:02:00.610  Fetching value of define "__znver1__" : (undefined) 
00:02:00.610  Fetching value of define "__znver2__" : (undefined) 
00:02:00.610  Fetching value of define "__znver3__" : (undefined) 
00:02:00.610  Fetching value of define "__znver4__" : (undefined) 
00:02:00.610  Library asan found: YES
00:02:00.610  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:00.610  Message: lib/log: Defining dependency "log"
00:02:00.610  Message: lib/kvargs: Defining dependency "kvargs"
00:02:00.610  Message: lib/telemetry: Defining dependency "telemetry"
00:02:00.610  Library rt found: YES
00:02:00.610  Checking for function "getentropy" : NO 
00:02:00.610  Message: lib/eal: Defining dependency "eal"
00:02:00.610  Message: lib/ring: Defining dependency "ring"
00:02:00.610  Message: lib/rcu: Defining dependency "rcu"
00:02:00.610  Message: lib/mempool: Defining dependency "mempool"
00:02:00.610  Message: lib/mbuf: Defining dependency "mbuf"
00:02:00.610  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:00.610  Fetching value of define "__AVX512F__" : 1 (cached)
00:02:00.610  Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:00.610  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:00.610  Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:00.610  Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:00.610  Compiler for C supports arguments -mpclmul: YES 
00:02:00.610  Compiler for C supports arguments -maes: YES 
00:02:00.610  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:00.610  Compiler for C supports arguments -mavx512bw: YES 
00:02:00.610  Compiler for C supports arguments -mavx512dq: YES 
00:02:00.610  Compiler for C supports arguments -mavx512vl: YES 
00:02:00.610  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:00.610  Compiler for C supports arguments -mavx2: YES 
00:02:00.610  Compiler for C supports arguments -mavx: YES 
00:02:00.610  Message: lib/net: Defining dependency "net"
00:02:00.610  Message: lib/meter: Defining dependency "meter"
00:02:00.610  Message: lib/ethdev: Defining dependency "ethdev"
00:02:00.610  Message: lib/pci: Defining dependency "pci"
00:02:00.610  Message: lib/cmdline: Defining dependency "cmdline"
00:02:00.610  Message: lib/hash: Defining dependency "hash"
00:02:00.610  Message: lib/timer: Defining dependency "timer"
00:02:00.610  Message: lib/compressdev: Defining dependency "compressdev"
00:02:00.610  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:00.610  Message: lib/dmadev: Defining dependency "dmadev"
00:02:00.610  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:00.610  Message: lib/power: Defining dependency "power"
00:02:00.610  Message: lib/reorder: Defining dependency "reorder"
00:02:00.610  Message: lib/security: Defining dependency "security"
00:02:00.610  Has header "linux/userfaultfd.h" : YES 
00:02:00.610  Has header "linux/vduse.h" : YES 
00:02:00.610  Message: lib/vhost: Defining dependency "vhost"
00:02:00.610  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:00.610  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:00.610  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:00.610  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:00.610  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:00.610  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:00.610  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:00.610  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:00.610  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:00.610  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:00.610  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:00.610  Configuring doxy-api-html.conf using configuration
00:02:00.610  Configuring doxy-api-man.conf using configuration
00:02:00.610  Program mandb found: YES (/usr/bin/mandb)
00:02:00.610  Program sphinx-build found: NO
00:02:00.610  Configuring rte_build_config.h using configuration
00:02:00.610  Message: 
00:02:00.610  =================
00:02:00.610  Applications Enabled
00:02:00.610  =================
00:02:00.610  
00:02:00.610  apps:
00:02:00.610  	
00:02:00.610  
00:02:00.610  Message: 
00:02:00.610  =================
00:02:00.610  Libraries Enabled
00:02:00.610  =================
00:02:00.610  
00:02:00.610  libs:
00:02:00.610  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:00.610  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:00.610  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:00.610  
00:02:00.610  Message: 
00:02:00.610  ===============
00:02:00.610  Drivers Enabled
00:02:00.610  ===============
00:02:00.610  
00:02:00.610  common:
00:02:00.610  	
00:02:00.610  bus:
00:02:00.610  	pci, vdev, 
00:02:00.610  mempool:
00:02:00.610  	ring, 
00:02:00.610  dma:
00:02:00.610  	
00:02:00.610  net:
00:02:00.610  	
00:02:00.610  crypto:
00:02:00.610  	
00:02:00.610  compress:
00:02:00.610  	
00:02:00.610  vdpa:
00:02:00.610  	
00:02:00.610  
00:02:00.611  Message: 
00:02:00.611  =================
00:02:00.611  Content Skipped
00:02:00.611  =================
00:02:00.611  
00:02:00.611  apps:
00:02:00.611  	dumpcap:	explicitly disabled via build config
00:02:00.611  	graph:	explicitly disabled via build config
00:02:00.611  	pdump:	explicitly disabled via build config
00:02:00.611  	proc-info:	explicitly disabled via build config
00:02:00.611  	test-acl:	explicitly disabled via build config
00:02:00.611  	test-bbdev:	explicitly disabled via build config
00:02:00.611  	test-cmdline:	explicitly disabled via build config
00:02:00.611  	test-compress-perf:	explicitly disabled via build config
00:02:00.611  	test-crypto-perf:	explicitly disabled via build config
00:02:00.611  	test-dma-perf:	explicitly disabled via build config
00:02:00.611  	test-eventdev:	explicitly disabled via build config
00:02:00.611  	test-fib:	explicitly disabled via build config
00:02:00.611  	test-flow-perf:	explicitly disabled via build config
00:02:00.611  	test-gpudev:	explicitly disabled via build config
00:02:00.611  	test-mldev:	explicitly disabled via build config
00:02:00.611  	test-pipeline:	explicitly disabled via build config
00:02:00.611  	test-pmd:	explicitly disabled via build config
00:02:00.611  	test-regex:	explicitly disabled via build config
00:02:00.611  	test-sad:	explicitly disabled via build config
00:02:00.611  	test-security-perf:	explicitly disabled via build config
00:02:00.611  	
00:02:00.611  libs:
00:02:00.611  	argparse:	explicitly disabled via build config
00:02:00.611  	metrics:	explicitly disabled via build config
00:02:00.611  	acl:	explicitly disabled via build config
00:02:00.611  	bbdev:	explicitly disabled via build config
00:02:00.611  	bitratestats:	explicitly disabled via build config
00:02:00.611  	bpf:	explicitly disabled via build config
00:02:00.611  	cfgfile:	explicitly disabled via build config
00:02:00.611  	distributor:	explicitly disabled via build config
00:02:00.611  	efd:	explicitly disabled via build config
00:02:00.611  	eventdev:	explicitly disabled via build config
00:02:00.611  	dispatcher:	explicitly disabled via build config
00:02:00.611  	gpudev:	explicitly disabled via build config
00:02:00.611  	gro:	explicitly disabled via build config
00:02:00.611  	gso:	explicitly disabled via build config
00:02:00.611  	ip_frag:	explicitly disabled via build config
00:02:00.611  	jobstats:	explicitly disabled via build config
00:02:00.611  	latencystats:	explicitly disabled via build config
00:02:00.611  	lpm:	explicitly disabled via build config
00:02:00.611  	member:	explicitly disabled via build config
00:02:00.611  	pcapng:	explicitly disabled via build config
00:02:00.611  	rawdev:	explicitly disabled via build config
00:02:00.611  	regexdev:	explicitly disabled via build config
00:02:00.611  	mldev:	explicitly disabled via build config
00:02:00.611  	rib:	explicitly disabled via build config
00:02:00.611  	sched:	explicitly disabled via build config
00:02:00.611  	stack:	explicitly disabled via build config
00:02:00.611  	ipsec:	explicitly disabled via build config
00:02:00.611  	pdcp:	explicitly disabled via build config
00:02:00.611  	fib:	explicitly disabled via build config
00:02:00.611  	port:	explicitly disabled via build config
00:02:00.611  	pdump:	explicitly disabled via build config
00:02:00.611  	table:	explicitly disabled via build config
00:02:00.611  	pipeline:	explicitly disabled via build config
00:02:00.611  	graph:	explicitly disabled via build config
00:02:00.611  	node:	explicitly disabled via build config
00:02:00.611  	
00:02:00.611  drivers:
00:02:00.611  	common/cpt:	not in enabled drivers build config
00:02:00.611  	common/dpaax:	not in enabled drivers build config
00:02:00.611  	common/iavf:	not in enabled drivers build config
00:02:00.611  	common/idpf:	not in enabled drivers build config
00:02:00.611  	common/ionic:	not in enabled drivers build config
00:02:00.611  	common/mvep:	not in enabled drivers build config
00:02:00.611  	common/octeontx:	not in enabled drivers build config
00:02:00.611  	bus/auxiliary:	not in enabled drivers build config
00:02:00.611  	bus/cdx:	not in enabled drivers build config
00:02:00.611  	bus/dpaa:	not in enabled drivers build config
00:02:00.611  	bus/fslmc:	not in enabled drivers build config
00:02:00.611  	bus/ifpga:	not in enabled drivers build config
00:02:00.611  	bus/platform:	not in enabled drivers build config
00:02:00.611  	bus/uacce:	not in enabled drivers build config
00:02:00.611  	bus/vmbus:	not in enabled drivers build config
00:02:00.611  	common/cnxk:	not in enabled drivers build config
00:02:00.611  	common/mlx5:	not in enabled drivers build config
00:02:00.611  	common/nfp:	not in enabled drivers build config
00:02:00.611  	common/nitrox:	not in enabled drivers build config
00:02:00.611  	common/qat:	not in enabled drivers build config
00:02:00.611  	common/sfc_efx:	not in enabled drivers build config
00:02:00.611  	mempool/bucket:	not in enabled drivers build config
00:02:00.611  	mempool/cnxk:	not in enabled drivers build config
00:02:00.611  	mempool/dpaa:	not in enabled drivers build config
00:02:00.611  	mempool/dpaa2:	not in enabled drivers build config
00:02:00.611  	mempool/octeontx:	not in enabled drivers build config
00:02:00.611  	mempool/stack:	not in enabled drivers build config
00:02:00.611  	dma/cnxk:	not in enabled drivers build config
00:02:00.611  	dma/dpaa:	not in enabled drivers build config
00:02:00.611  	dma/dpaa2:	not in enabled drivers build config
00:02:00.611  	dma/hisilicon:	not in enabled drivers build config
00:02:00.611  	dma/idxd:	not in enabled drivers build config
00:02:00.611  	dma/ioat:	not in enabled drivers build config
00:02:00.611  	dma/skeleton:	not in enabled drivers build config
00:02:00.611  	net/af_packet:	not in enabled drivers build config
00:02:00.611  	net/af_xdp:	not in enabled drivers build config
00:02:00.611  	net/ark:	not in enabled drivers build config
00:02:00.611  	net/atlantic:	not in enabled drivers build config
00:02:00.611  	net/avp:	not in enabled drivers build config
00:02:00.611  	net/axgbe:	not in enabled drivers build config
00:02:00.611  	net/bnx2x:	not in enabled drivers build config
00:02:00.611  	net/bnxt:	not in enabled drivers build config
00:02:00.611  	net/bonding:	not in enabled drivers build config
00:02:00.611  	net/cnxk:	not in enabled drivers build config
00:02:00.611  	net/cpfl:	not in enabled drivers build config
00:02:00.611  	net/cxgbe:	not in enabled drivers build config
00:02:00.611  	net/dpaa:	not in enabled drivers build config
00:02:00.611  	net/dpaa2:	not in enabled drivers build config
00:02:00.611  	net/e1000:	not in enabled drivers build config
00:02:00.611  	net/ena:	not in enabled drivers build config
00:02:00.611  	net/enetc:	not in enabled drivers build config
00:02:00.611  	net/enetfec:	not in enabled drivers build config
00:02:00.611  	net/enic:	not in enabled drivers build config
00:02:00.611  	net/failsafe:	not in enabled drivers build config
00:02:00.611  	net/fm10k:	not in enabled drivers build config
00:02:00.611  	net/gve:	not in enabled drivers build config
00:02:00.611  	net/hinic:	not in enabled drivers build config
00:02:00.611  	net/hns3:	not in enabled drivers build config
00:02:00.611  	net/i40e:	not in enabled drivers build config
00:02:00.611  	net/iavf:	not in enabled drivers build config
00:02:00.611  	net/ice:	not in enabled drivers build config
00:02:00.611  	net/idpf:	not in enabled drivers build config
00:02:00.611  	net/igc:	not in enabled drivers build config
00:02:00.611  	net/ionic:	not in enabled drivers build config
00:02:00.611  	net/ipn3ke:	not in enabled drivers build config
00:02:00.611  	net/ixgbe:	not in enabled drivers build config
00:02:00.611  	net/mana:	not in enabled drivers build config
00:02:00.611  	net/memif:	not in enabled drivers build config
00:02:00.611  	net/mlx4:	not in enabled drivers build config
00:02:00.611  	net/mlx5:	not in enabled drivers build config
00:02:00.611  	net/mvneta:	not in enabled drivers build config
00:02:00.611  	net/mvpp2:	not in enabled drivers build config
00:02:00.611  	net/netvsc:	not in enabled drivers build config
00:02:00.611  	net/nfb:	not in enabled drivers build config
00:02:00.611  	net/nfp:	not in enabled drivers build config
00:02:00.611  	net/ngbe:	not in enabled drivers build config
00:02:00.611  	net/null:	not in enabled drivers build config
00:02:00.611  	net/octeontx:	not in enabled drivers build config
00:02:00.611  	net/octeon_ep:	not in enabled drivers build config
00:02:00.611  	net/pcap:	not in enabled drivers build config
00:02:00.611  	net/pfe:	not in enabled drivers build config
00:02:00.611  	net/qede:	not in enabled drivers build config
00:02:00.611  	net/ring:	not in enabled drivers build config
00:02:00.611  	net/sfc:	not in enabled drivers build config
00:02:00.611  	net/softnic:	not in enabled drivers build config
00:02:00.611  	net/tap:	not in enabled drivers build config
00:02:00.611  	net/thunderx:	not in enabled drivers build config
00:02:00.611  	net/txgbe:	not in enabled drivers build config
00:02:00.611  	net/vdev_netvsc:	not in enabled drivers build config
00:02:00.611  	net/vhost:	not in enabled drivers build config
00:02:00.611  	net/virtio:	not in enabled drivers build config
00:02:00.611  	net/vmxnet3:	not in enabled drivers build config
00:02:00.611  	raw/*:	missing internal dependency, "rawdev"
00:02:00.611  	crypto/armv8:	not in enabled drivers build config
00:02:00.611  	crypto/bcmfs:	not in enabled drivers build config
00:02:00.611  	crypto/caam_jr:	not in enabled drivers build config
00:02:00.611  	crypto/ccp:	not in enabled drivers build config
00:02:00.611  	crypto/cnxk:	not in enabled drivers build config
00:02:00.611  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:00.611  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:00.611  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:00.611  	crypto/mlx5:	not in enabled drivers build config
00:02:00.611  	crypto/mvsam:	not in enabled drivers build config
00:02:00.611  	crypto/nitrox:	not in enabled drivers build config
00:02:00.611  	crypto/null:	not in enabled drivers build config
00:02:00.611  	crypto/octeontx:	not in enabled drivers build config
00:02:00.611  	crypto/openssl:	not in enabled drivers build config
00:02:00.611  	crypto/scheduler:	not in enabled drivers build config
00:02:00.611  	crypto/uadk:	not in enabled drivers build config
00:02:00.611  	crypto/virtio:	not in enabled drivers build config
00:02:00.611  	compress/isal:	not in enabled drivers build config
00:02:00.611  	compress/mlx5:	not in enabled drivers build config
00:02:00.611  	compress/nitrox:	not in enabled drivers build config
00:02:00.611  	compress/octeontx:	not in enabled drivers build config
00:02:00.611  	compress/zlib:	not in enabled drivers build config
00:02:00.611  	regex/*:	missing internal dependency, "regexdev"
00:02:00.611  	ml/*:	missing internal dependency, "mldev"
00:02:00.611  	vdpa/ifc:	not in enabled drivers build config
00:02:00.611  	vdpa/mlx5:	not in enabled drivers build config
00:02:00.611  	vdpa/nfp:	not in enabled drivers build config
00:02:00.611  	vdpa/sfc:	not in enabled drivers build config
00:02:00.611  	event/*:	missing internal dependency, "eventdev"
00:02:00.611  	baseband/*:	missing internal dependency, "bbdev"
00:02:00.611  	gpu/*:	missing internal dependency, "gpudev"
00:02:00.611  	
00:02:00.611  
00:02:00.611  Build targets in project: 84
00:02:00.611  
00:02:00.611  DPDK 24.03.0
00:02:00.611  
00:02:00.611    User defined options
00:02:00.611      buildtype          : debug
00:02:00.611      default_library    : shared
00:02:00.612      libdir             : lib
00:02:00.612      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:00.612      b_sanitize         : address
00:02:00.612      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:02:00.612      c_link_args        : 
00:02:00.612      cpu_instruction_set: native
00:02:00.612      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:00.612      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:00.612      enable_docs        : false
00:02:00.612      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:00.612      enable_kmods       : false
00:02:00.612      max_lcores         : 128
00:02:00.612      tests              : false
00:02:00.612  
00:02:00.612  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:01.177  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:01.177  [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:01.177  [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:01.177  [3/267] Linking static target lib/librte_kvargs.a
00:02:01.177  [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:01.177  [5/267] Linking static target lib/librte_log.a
00:02:01.177  [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:01.434  [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:01.434  [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:01.434  [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:01.434  [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:01.692  [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:01.692  [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:01.692  [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.692  [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:01.692  [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:01.692  [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:01.949  [17/267] Linking static target lib/librte_telemetry.a
00:02:01.949  [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:01.950  [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:01.950  [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:01.950  [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:01.950  [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:02.208  [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.208  [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:02.208  [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:02.208  [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:02.208  [27/267] Linking target lib/librte_log.so.24.1
00:02:02.208  [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:02.209  [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:02.469  [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:02.469  [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:02.469  [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:02.469  [33/267] Linking target lib/librte_kvargs.so.24.1
00:02:02.469  [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:02.469  [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:02.469  [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:02.726  [37/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.726  [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:02.726  [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:02.726  [40/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:02.726  [41/267] Linking target lib/librte_telemetry.so.24.1
00:02:02.726  [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:02.726  [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:02.726  [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:02.726  [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:02.983  [46/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:02.983  [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:02.983  [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:02.983  [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:02.983  [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:02.983  [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:03.241  [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:03.241  [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:03.241  [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:03.241  [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:03.241  [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:03.241  [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:03.241  [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:03.498  [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:03.498  [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:03.498  [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:03.498  [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:03.498  [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:03.498  [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:03.756  [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:03.756  [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:03.756  [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:03.756  [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:03.756  [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:03.756  [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:04.013  [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:04.013  [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:04.013  [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:04.013  [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:04.013  [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:04.013  [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:04.013  [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:04.013  [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:04.013  [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:04.270  [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:04.270  [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:04.270  [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:04.528  [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:04.529  [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:04.529  [85/267] Linking static target lib/librte_ring.a
00:02:04.529  [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:04.529  [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:04.529  [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:04.529  [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:04.786  [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:04.786  [91/267] Linking static target lib/librte_eal.a
00:02:04.786  [92/267] Linking static target lib/librte_mempool.a
00:02:04.786  [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:04.786  [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:05.044  [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:05.044  [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.044  [97/267] Linking static target lib/librte_rcu.a
00:02:05.044  [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:05.044  [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:05.044  [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:05.302  [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:05.302  [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:05.302  [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:05.302  [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:05.302  [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:05.302  [106/267] Linking static target lib/librte_meter.a
00:02:05.302  [107/267] Linking static target lib/librte_net.a
00:02:05.302  [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:05.559  [109/267] Linking static target lib/librte_mbuf.a
00:02:05.559  [110/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.559  [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:05.559  [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:05.559  [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:05.865  [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.865  [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.865  [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.865  [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:05.865  [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:06.122  [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:06.122  [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:06.379  [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.379  [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:06.379  [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:06.379  [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:06.379  [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:06.379  [126/267] Linking static target lib/librte_pci.a
00:02:06.379  [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:06.379  [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:06.637  [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:06.637  [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:06.637  [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:06.637  [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:06.637  [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:06.637  [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:06.637  [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:06.637  [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.894  [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:06.894  [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:06.894  [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:06.894  [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:06.894  [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:06.894  [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:06.894  [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:06.894  [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:06.894  [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:06.894  [146/267] Linking static target lib/librte_cmdline.a
00:02:07.152  [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:07.152  [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:07.152  [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:07.409  [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:07.409  [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:07.409  [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:07.409  [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:07.409  [154/267] Linking static target lib/librte_ethdev.a
00:02:07.667  [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:07.667  [156/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:07.667  [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:07.667  [158/267] Linking static target lib/librte_timer.a
00:02:07.667  [159/267] Linking static target lib/librte_hash.a
00:02:07.667  [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:07.667  [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:07.668  [162/267] Linking static target lib/librte_compressdev.a
00:02:07.668  [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:07.925  [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:07.925  [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:07.925  [166/267] Linking static target lib/librte_dmadev.a
00:02:07.925  [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.190  [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:08.190  [169/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:08.190  [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:08.190  [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:08.190  [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.449  [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.449  [174/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:08.449  [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:08.449  [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:08.449  [177/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.449  [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.706  [179/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:08.706  [180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:08.706  [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:08.706  [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:08.706  [183/267] Linking static target lib/librte_cryptodev.a
00:02:08.706  [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:08.706  [185/267] Linking static target lib/librte_power.a
00:02:08.965  [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:08.965  [187/267] Linking static target lib/librte_reorder.a
00:02:08.965  [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:08.965  [189/267] Linking static target lib/librte_security.a
00:02:08.965  [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:09.223  [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:09.223  [192/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:09.482  [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.482  [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:09.741  [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.741  [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:09.741  [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.741  [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:09.741  [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:09.741  [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:10.000  [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:10.258  [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:10.258  [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:10.258  [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:10.258  [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:10.517  [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:10.517  [207/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:10.517  [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:10.517  [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:10.517  [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:10.517  [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:10.774  [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:10.774  [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:10.774  [214/267] Linking static target drivers/librte_bus_vdev.a
00:02:10.774  [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:10.774  [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:10.774  [217/267] Linking static target drivers/librte_bus_pci.a
00:02:10.774  [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:10.774  [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:10.774  [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.032  [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.032  [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:11.032  [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:11.033  [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:11.033  [225/267] Linking static target drivers/librte_mempool_ring.a
00:02:11.290  [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.856  [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:13.229  [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.229  [229/267] Linking target lib/librte_eal.so.24.1
00:02:13.229  [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:13.229  [231/267] Linking target lib/librte_timer.so.24.1
00:02:13.229  [232/267] Linking target drivers/librte_bus_vdev.so.24.1
00:02:13.229  [233/267] Linking target lib/librte_meter.so.24.1
00:02:13.229  [234/267] Linking target lib/librte_ring.so.24.1
00:02:13.229  [235/267] Linking target lib/librte_pci.so.24.1
00:02:13.229  [236/267] Linking target lib/librte_dmadev.so.24.1
00:02:13.229  [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:13.488  [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:13.488  [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:13.488  [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:13.488  [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:13.488  [242/267] Linking target lib/librte_rcu.so.24.1
00:02:13.488  [243/267] Linking target lib/librte_mempool.so.24.1
00:02:13.488  [244/267] Linking target drivers/librte_bus_pci.so.24.1
00:02:13.488  [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:13.488  [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:13.488  [247/267] Linking target drivers/librte_mempool_ring.so.24.1
00:02:13.488  [248/267] Linking target lib/librte_mbuf.so.24.1
00:02:13.747  [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:13.747  [250/267] Linking target lib/librte_reorder.so.24.1
00:02:13.747  [251/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.747  [252/267] Linking target lib/librte_net.so.24.1
00:02:13.747  [253/267] Linking target lib/librte_compressdev.so.24.1
00:02:13.747  [254/267] Linking target lib/librte_cryptodev.so.24.1
00:02:13.747  [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:13.747  [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:14.006  [257/267] Linking target lib/librte_cmdline.so.24.1
00:02:14.006  [258/267] Linking target lib/librte_hash.so.24.1
00:02:14.006  [259/267] Linking target lib/librte_security.so.24.1
00:02:14.006  [260/267] Linking target lib/librte_ethdev.so.24.1
00:02:14.006  [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:14.006  [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:14.006  [263/267] Linking target lib/librte_power.so.24.1
00:02:14.938  [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:15.196  [265/267] Linking static target lib/librte_vhost.a
00:02:16.216  [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.475  [267/267] Linking target lib/librte_vhost.so.24.1
00:02:16.475  INFO: autodetecting backend as ninja
00:02:16.475  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:34.579    CC lib/ut/ut.o
00:02:34.579    CC lib/log/log.o
00:02:34.579    CC lib/log/log_flags.o
00:02:34.579    CC lib/log/log_deprecated.o
00:02:34.579    CC lib/ut_mock/mock.o
00:02:34.579    LIB libspdk_ut.a
00:02:34.579    LIB libspdk_log.a
00:02:34.579    SO libspdk_ut.so.2.0
00:02:34.579    LIB libspdk_ut_mock.a
00:02:34.579    SO libspdk_log.so.7.1
00:02:34.579    SO libspdk_ut_mock.so.6.0
00:02:34.579    SYMLINK libspdk_ut.so
00:02:34.579    SYMLINK libspdk_ut_mock.so
00:02:34.579    SYMLINK libspdk_log.so
00:02:34.579    CC lib/util/base64.o
00:02:34.579    CC lib/util/bit_array.o
00:02:34.579    CC lib/util/cpuset.o
00:02:34.579    CC lib/util/crc16.o
00:02:34.579    CC lib/util/crc32.o
00:02:34.579    CXX lib/trace_parser/trace.o
00:02:34.579    CC lib/dma/dma.o
00:02:34.579    CC lib/util/crc32c.o
00:02:34.579    CC lib/ioat/ioat.o
00:02:34.579    CC lib/vfio_user/host/vfio_user_pci.o
00:02:34.579    CC lib/util/crc32_ieee.o
00:02:34.579    CC lib/util/crc64.o
00:02:34.579    CC lib/util/dif.o
00:02:34.579    CC lib/util/fd.o
00:02:34.579    LIB libspdk_dma.a
00:02:34.579    SO libspdk_dma.so.5.0
00:02:34.579    CC lib/vfio_user/host/vfio_user.o
00:02:34.579    CC lib/util/fd_group.o
00:02:34.579    CC lib/util/file.o
00:02:34.579    CC lib/util/hexlify.o
00:02:34.579    SYMLINK libspdk_dma.so
00:02:34.579    CC lib/util/iov.o
00:02:34.579    LIB libspdk_ioat.a
00:02:34.579    CC lib/util/math.o
00:02:34.579    SO libspdk_ioat.so.7.0
00:02:34.579    CC lib/util/net.o
00:02:34.579    SYMLINK libspdk_ioat.so
00:02:34.579    CC lib/util/pipe.o
00:02:34.579    CC lib/util/strerror_tls.o
00:02:34.579    CC lib/util/string.o
00:02:34.579    CC lib/util/uuid.o
00:02:34.579    LIB libspdk_vfio_user.a
00:02:34.579    CC lib/util/xor.o
00:02:34.579    SO libspdk_vfio_user.so.5.0
00:02:34.579    CC lib/util/zipf.o
00:02:34.579    CC lib/util/md5.o
00:02:34.579    SYMLINK libspdk_vfio_user.so
00:02:34.838    LIB libspdk_util.a
00:02:34.839    SO libspdk_util.so.10.1
00:02:34.839    LIB libspdk_trace_parser.a
00:02:34.839    SYMLINK libspdk_util.so
00:02:34.839    SO libspdk_trace_parser.so.6.0
00:02:35.098    SYMLINK libspdk_trace_parser.so
00:02:35.098    CC lib/vmd/vmd.o
00:02:35.098    CC lib/vmd/led.o
00:02:35.098    CC lib/env_dpdk/env.o
00:02:35.098    CC lib/env_dpdk/memory.o
00:02:35.098    CC lib/env_dpdk/pci.o
00:02:35.098    CC lib/env_dpdk/init.o
00:02:35.098    CC lib/conf/conf.o
00:02:35.098    CC lib/rdma_utils/rdma_utils.o
00:02:35.098    CC lib/json/json_parse.o
00:02:35.098    CC lib/idxd/idxd.o
00:02:35.098    CC lib/json/json_util.o
00:02:35.357    LIB libspdk_conf.a
00:02:35.357    SO libspdk_conf.so.6.0
00:02:35.357    CC lib/json/json_write.o
00:02:35.357    LIB libspdk_rdma_utils.a
00:02:35.357    SYMLINK libspdk_conf.so
00:02:35.357    SO libspdk_rdma_utils.so.1.0
00:02:35.357    CC lib/env_dpdk/threads.o
00:02:35.357    SYMLINK libspdk_rdma_utils.so
00:02:35.357    CC lib/idxd/idxd_user.o
00:02:35.357    CC lib/env_dpdk/pci_ioat.o
00:02:35.357    CC lib/idxd/idxd_kernel.o
00:02:35.357    CC lib/env_dpdk/pci_virtio.o
00:02:35.357    CC lib/env_dpdk/pci_vmd.o
00:02:35.617    CC lib/env_dpdk/pci_idxd.o
00:02:35.617    CC lib/env_dpdk/pci_event.o
00:02:35.617    LIB libspdk_json.a
00:02:35.617    CC lib/env_dpdk/sigbus_handler.o
00:02:35.617    CC lib/env_dpdk/pci_dpdk.o
00:02:35.617    SO libspdk_json.so.6.0
00:02:35.617    CC lib/env_dpdk/pci_dpdk_2207.o
00:02:35.617    CC lib/env_dpdk/pci_dpdk_2211.o
00:02:35.617    LIB libspdk_idxd.a
00:02:35.617    SYMLINK libspdk_json.so
00:02:35.617    SO libspdk_idxd.so.12.1
00:02:35.617    SYMLINK libspdk_idxd.so
00:02:35.877    LIB libspdk_vmd.a
00:02:35.877    SO libspdk_vmd.so.6.0
00:02:35.877    SYMLINK libspdk_vmd.so
00:02:35.877    CC lib/rdma_provider/common.o
00:02:35.877    CC lib/rdma_provider/rdma_provider_verbs.o
00:02:35.877    CC lib/jsonrpc/jsonrpc_server.o
00:02:35.877    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:35.877    CC lib/jsonrpc/jsonrpc_client.o
00:02:35.877    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:36.137    LIB libspdk_rdma_provider.a
00:02:36.137    SO libspdk_rdma_provider.so.7.0
00:02:36.137    LIB libspdk_jsonrpc.a
00:02:36.137    SYMLINK libspdk_rdma_provider.so
00:02:36.137    SO libspdk_jsonrpc.so.6.0
00:02:36.137    SYMLINK libspdk_jsonrpc.so
00:02:36.396    LIB libspdk_env_dpdk.a
00:02:36.396    CC lib/rpc/rpc.o
00:02:36.655    SO libspdk_env_dpdk.so.15.1
00:02:36.655    SYMLINK libspdk_env_dpdk.so
00:02:36.655    LIB libspdk_rpc.a
00:02:36.655    SO libspdk_rpc.so.6.0
00:02:36.916    SYMLINK libspdk_rpc.so
00:02:36.916    CC lib/notify/notify.o
00:02:36.916    CC lib/trace/trace.o
00:02:36.916    CC lib/trace/trace_flags.o
00:02:36.916    CC lib/notify/notify_rpc.o
00:02:36.916    CC lib/trace/trace_rpc.o
00:02:36.916    CC lib/keyring/keyring.o
00:02:36.916    CC lib/keyring/keyring_rpc.o
00:02:37.176    LIB libspdk_notify.a
00:02:37.176    SO libspdk_notify.so.6.0
00:02:37.176    SYMLINK libspdk_notify.so
00:02:37.176    LIB libspdk_keyring.a
00:02:37.176    LIB libspdk_trace.a
00:02:37.176    SO libspdk_keyring.so.2.0
00:02:37.176    SO libspdk_trace.so.11.0
00:02:37.436    SYMLINK libspdk_trace.so
00:02:37.436    SYMLINK libspdk_keyring.so
00:02:37.436    CC lib/thread/thread.o
00:02:37.436    CC lib/thread/iobuf.o
00:02:37.436    CC lib/sock/sock.o
00:02:37.436    CC lib/sock/sock_rpc.o
00:02:38.005    LIB libspdk_sock.a
00:02:38.005    SO libspdk_sock.so.10.0
00:02:38.005    SYMLINK libspdk_sock.so
00:02:38.267    CC lib/nvme/nvme_ctrlr.o
00:02:38.267    CC lib/nvme/nvme_ns_cmd.o
00:02:38.267    CC lib/nvme/nvme_ctrlr_cmd.o
00:02:38.267    CC lib/nvme/nvme_fabric.o
00:02:38.267    CC lib/nvme/nvme_ns.o
00:02:38.267    CC lib/nvme/nvme_pcie_common.o
00:02:38.267    CC lib/nvme/nvme_pcie.o
00:02:38.267    CC lib/nvme/nvme_qpair.o
00:02:38.267    CC lib/nvme/nvme.o
00:02:38.837    CC lib/nvme/nvme_quirks.o
00:02:38.837    CC lib/nvme/nvme_transport.o
00:02:38.837    CC lib/nvme/nvme_discovery.o
00:02:39.097    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:39.097    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:39.097    LIB libspdk_thread.a
00:02:39.097    CC lib/nvme/nvme_tcp.o
00:02:39.097    SO libspdk_thread.so.11.0
00:02:39.097    CC lib/nvme/nvme_opal.o
00:02:39.097    CC lib/nvme/nvme_io_msg.o
00:02:39.097    SYMLINK libspdk_thread.so
00:02:39.356    CC lib/nvme/nvme_poll_group.o
00:02:39.356    CC lib/accel/accel.o
00:02:39.356    CC lib/nvme/nvme_zns.o
00:02:39.356    CC lib/nvme/nvme_stubs.o
00:02:39.614    CC lib/nvme/nvme_auth.o
00:02:39.614    CC lib/nvme/nvme_cuse.o
00:02:39.614    CC lib/nvme/nvme_rdma.o
00:02:39.873    CC lib/accel/accel_rpc.o
00:02:39.873    CC lib/accel/accel_sw.o
00:02:40.130    CC lib/blob/blobstore.o
00:02:40.130    CC lib/init/json_config.o
00:02:40.130    CC lib/virtio/virtio.o
00:02:40.130    CC lib/init/subsystem.o
00:02:40.388    CC lib/init/subsystem_rpc.o
00:02:40.388    LIB libspdk_accel.a
00:02:40.388    SO libspdk_accel.so.16.0
00:02:40.388    CC lib/virtio/virtio_vhost_user.o
00:02:40.388    SYMLINK libspdk_accel.so
00:02:40.388    CC lib/init/rpc.o
00:02:40.388    CC lib/blob/request.o
00:02:40.388    CC lib/blob/zeroes.o
00:02:40.388    CC lib/blob/blob_bs_dev.o
00:02:40.647    CC lib/virtio/virtio_vfio_user.o
00:02:40.647    CC lib/fsdev/fsdev.o
00:02:40.647    LIB libspdk_init.a
00:02:40.647    SO libspdk_init.so.6.0
00:02:40.647    CC lib/fsdev/fsdev_io.o
00:02:40.647    CC lib/virtio/virtio_pci.o
00:02:40.647    SYMLINK libspdk_init.so
00:02:40.647    CC lib/fsdev/fsdev_rpc.o
00:02:40.647    CC lib/bdev/bdev.o
00:02:40.647    CC lib/bdev/bdev_rpc.o
00:02:40.647    CC lib/bdev/bdev_zone.o
00:02:40.904    CC lib/bdev/part.o
00:02:40.904    CC lib/event/app.o
00:02:40.904    LIB libspdk_virtio.a
00:02:40.904    SO libspdk_virtio.so.7.0
00:02:40.904    SYMLINK libspdk_virtio.so
00:02:40.904    CC lib/bdev/scsi_nvme.o
00:02:40.904    CC lib/event/reactor.o
00:02:40.904    CC lib/event/log_rpc.o
00:02:41.162    CC lib/event/app_rpc.o
00:02:41.162    CC lib/event/scheduler_static.o
00:02:41.162    LIB libspdk_nvme.a
00:02:41.162    LIB libspdk_fsdev.a
00:02:41.162    SO libspdk_fsdev.so.2.0
00:02:41.162    SYMLINK libspdk_fsdev.so
00:02:41.162    SO libspdk_nvme.so.15.0
00:02:41.421    LIB libspdk_event.a
00:02:41.421    SO libspdk_event.so.14.0
00:02:41.421    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:41.421    SYMLINK libspdk_event.so
00:02:41.421    SYMLINK libspdk_nvme.so
00:02:41.992    LIB libspdk_fuse_dispatcher.a
00:02:42.253    SO libspdk_fuse_dispatcher.so.1.0
00:02:42.253    SYMLINK libspdk_fuse_dispatcher.so
00:02:43.635    LIB libspdk_blob.a
00:02:43.635    SO libspdk_blob.so.12.0
00:02:43.635    LIB libspdk_bdev.a
00:02:43.635    SYMLINK libspdk_blob.so
00:02:43.635    SO libspdk_bdev.so.17.0
00:02:43.635    SYMLINK libspdk_bdev.so
00:02:43.894    CC lib/lvol/lvol.o
00:02:43.895    CC lib/blobfs/blobfs.o
00:02:43.895    CC lib/blobfs/tree.o
00:02:43.895    CC lib/scsi/dev.o
00:02:43.895    CC lib/ublk/ublk.o
00:02:43.895    CC lib/ftl/ftl_core.o
00:02:43.895    CC lib/ublk/ublk_rpc.o
00:02:43.895    CC lib/ftl/ftl_init.o
00:02:43.895    CC lib/nbd/nbd.o
00:02:43.895    CC lib/nvmf/ctrlr.o
00:02:43.895    CC lib/nvmf/ctrlr_discovery.o
00:02:44.154    CC lib/scsi/lun.o
00:02:44.154    CC lib/scsi/port.o
00:02:44.154    CC lib/nbd/nbd_rpc.o
00:02:44.154    CC lib/nvmf/ctrlr_bdev.o
00:02:44.154    CC lib/ftl/ftl_layout.o
00:02:44.154    CC lib/ftl/ftl_debug.o
00:02:44.413    LIB libspdk_nbd.a
00:02:44.413    SO libspdk_nbd.so.7.0
00:02:44.413    CC lib/scsi/scsi.o
00:02:44.413    SYMLINK libspdk_nbd.so
00:02:44.413    CC lib/ftl/ftl_io.o
00:02:44.413    LIB libspdk_ublk.a
00:02:44.413    SO libspdk_ublk.so.3.0
00:02:44.413    CC lib/scsi/scsi_bdev.o
00:02:44.413    CC lib/scsi/scsi_pr.o
00:02:44.413    SYMLINK libspdk_ublk.so
00:02:44.413    CC lib/ftl/ftl_sb.o
00:02:44.673    CC lib/ftl/ftl_l2p.o
00:02:44.673    CC lib/ftl/ftl_l2p_flat.o
00:02:44.673    CC lib/ftl/ftl_nv_cache.o
00:02:44.673    CC lib/ftl/ftl_band.o
00:02:44.673    LIB libspdk_blobfs.a
00:02:44.673    SO libspdk_blobfs.so.11.0
00:02:44.673    SYMLINK libspdk_blobfs.so
00:02:44.673    CC lib/ftl/ftl_band_ops.o
00:02:44.673    LIB libspdk_lvol.a
00:02:44.673    CC lib/ftl/ftl_writer.o
00:02:44.673    CC lib/ftl/ftl_rq.o
00:02:44.673    SO libspdk_lvol.so.11.0
00:02:44.932    CC lib/ftl/ftl_reloc.o
00:02:44.932    SYMLINK libspdk_lvol.so
00:02:44.932    CC lib/ftl/ftl_l2p_cache.o
00:02:44.932    CC lib/ftl/ftl_p2l.o
00:02:44.932    CC lib/nvmf/subsystem.o
00:02:44.932    CC lib/scsi/scsi_rpc.o
00:02:44.932    CC lib/ftl/ftl_p2l_log.o
00:02:44.932    CC lib/ftl/mngt/ftl_mngt.o
00:02:45.190    CC lib/scsi/task.o
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:45.190    LIB libspdk_scsi.a
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_md.o
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:45.190    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:45.190    CC lib/nvmf/nvmf.o
00:02:45.450    SO libspdk_scsi.so.9.0
00:02:45.450    SYMLINK libspdk_scsi.so
00:02:45.450    CC lib/ftl/mngt/ftl_mngt_band.o
00:02:45.450    CC lib/nvmf/nvmf_rpc.o
00:02:45.450    CC lib/nvmf/transport.o
00:02:45.450    CC lib/nvmf/tcp.o
00:02:45.713    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:45.713    CC lib/iscsi/conn.o
00:02:45.713    CC lib/nvmf/stubs.o
00:02:45.713    CC lib/iscsi/init_grp.o
00:02:45.713    CC lib/iscsi/iscsi.o
00:02:45.713    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:45.973    CC lib/iscsi/param.o
00:02:45.973    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:45.973    CC lib/nvmf/mdns_server.o
00:02:46.276    CC lib/nvmf/rdma.o
00:02:46.276    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:46.276    CC lib/nvmf/auth.o
00:02:46.276    CC lib/iscsi/portal_grp.o
00:02:46.276    CC lib/vhost/vhost.o
00:02:46.276    CC lib/vhost/vhost_rpc.o
00:02:46.276    CC lib/vhost/vhost_scsi.o
00:02:46.537    CC lib/ftl/utils/ftl_conf.o
00:02:46.537    CC lib/ftl/utils/ftl_md.o
00:02:46.537    CC lib/iscsi/tgt_node.o
00:02:46.537    CC lib/vhost/vhost_blk.o
00:02:46.796    CC lib/ftl/utils/ftl_mempool.o
00:02:46.796    CC lib/vhost/rte_vhost_user.o
00:02:46.796    CC lib/ftl/utils/ftl_bitmap.o
00:02:47.055    CC lib/iscsi/iscsi_subsystem.o
00:02:47.055    CC lib/iscsi/iscsi_rpc.o
00:02:47.055    CC lib/ftl/utils/ftl_property.o
00:02:47.055    CC lib/iscsi/task.o
00:02:47.314    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:47.314    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:47.314    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:47.314    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:47.314    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:47.314    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:47.314    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:47.314    CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:47.314    CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:47.314    LIB libspdk_iscsi.a
00:02:47.574    CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:47.574    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:47.574    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:47.574    SO libspdk_iscsi.so.8.0
00:02:47.574    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:47.574    CC lib/ftl/base/ftl_base_dev.o
00:02:47.574    CC lib/ftl/base/ftl_base_bdev.o
00:02:47.574    CC lib/ftl/ftl_trace.o
00:02:47.836    SYMLINK libspdk_iscsi.so
00:02:47.836    LIB libspdk_ftl.a
00:02:47.836    LIB libspdk_vhost.a
00:02:47.836    SO libspdk_vhost.so.8.0
00:02:48.097    SYMLINK libspdk_vhost.so
00:02:48.097    SO libspdk_ftl.so.9.0
00:02:48.358    SYMLINK libspdk_ftl.so
00:02:48.358    LIB libspdk_nvmf.a
00:02:48.619    SO libspdk_nvmf.so.20.0
00:02:48.879    SYMLINK libspdk_nvmf.so
00:02:49.137    CC module/env_dpdk/env_dpdk_rpc.o
00:02:49.137    CC module/keyring/linux/keyring.o
00:02:49.137    CC module/accel/dsa/accel_dsa.o
00:02:49.137    CC module/fsdev/aio/fsdev_aio.o
00:02:49.137    CC module/accel/error/accel_error.o
00:02:49.137    CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:49.137    CC module/keyring/file/keyring.o
00:02:49.137    CC module/blob/bdev/blob_bdev.o
00:02:49.137    CC module/accel/ioat/accel_ioat.o
00:02:49.137    CC module/sock/posix/posix.o
00:02:49.137    LIB libspdk_env_dpdk_rpc.a
00:02:49.137    SO libspdk_env_dpdk_rpc.so.6.0
00:02:49.137    SYMLINK libspdk_env_dpdk_rpc.so
00:02:49.137    CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:49.137    CC module/keyring/linux/keyring_rpc.o
00:02:49.395    CC module/keyring/file/keyring_rpc.o
00:02:49.395    CC module/accel/ioat/accel_ioat_rpc.o
00:02:49.395    CC module/accel/error/accel_error_rpc.o
00:02:49.395    LIB libspdk_scheduler_dynamic.a
00:02:49.395    SO libspdk_scheduler_dynamic.so.4.0
00:02:49.395    CC module/accel/dsa/accel_dsa_rpc.o
00:02:49.395    LIB libspdk_keyring_linux.a
00:02:49.395    SO libspdk_keyring_linux.so.1.0
00:02:49.395    LIB libspdk_keyring_file.a
00:02:49.395    LIB libspdk_blob_bdev.a
00:02:49.395    CC module/fsdev/aio/linux_aio_mgr.o
00:02:49.395    SO libspdk_keyring_file.so.2.0
00:02:49.395    SO libspdk_blob_bdev.so.12.0
00:02:49.395    SYMLINK libspdk_scheduler_dynamic.so
00:02:49.395    LIB libspdk_accel_ioat.a
00:02:49.395    LIB libspdk_accel_error.a
00:02:49.395    SO libspdk_accel_ioat.so.6.0
00:02:49.395    SYMLINK libspdk_keyring_linux.so
00:02:49.395    SYMLINK libspdk_keyring_file.so
00:02:49.395    SO libspdk_accel_error.so.2.0
00:02:49.395    LIB libspdk_accel_dsa.a
00:02:49.395    SYMLINK libspdk_blob_bdev.so
00:02:49.395    SO libspdk_accel_dsa.so.5.0
00:02:49.654    SYMLINK libspdk_accel_ioat.so
00:02:49.654    SYMLINK libspdk_accel_error.so
00:02:49.654    SYMLINK libspdk_accel_dsa.so
00:02:49.654    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:49.654    CC module/accel/iaa/accel_iaa.o
00:02:49.654    CC module/scheduler/gscheduler/gscheduler.o
00:02:49.654    LIB libspdk_fsdev_aio.a
00:02:49.654    SO libspdk_fsdev_aio.so.1.0
00:02:49.654    CC module/bdev/delay/vbdev_delay.o
00:02:49.654    CC module/bdev/gpt/gpt.o
00:02:49.654    CC module/bdev/error/vbdev_error.o
00:02:49.654    LIB libspdk_scheduler_dpdk_governor.a
00:02:49.654    CC module/bdev/lvol/vbdev_lvol.o
00:02:49.915    CC module/blobfs/bdev/blobfs_bdev.o
00:02:49.915    SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:49.915    LIB libspdk_scheduler_gscheduler.a
00:02:49.915    SYMLINK libspdk_fsdev_aio.so
00:02:49.915    CC module/bdev/error/vbdev_error_rpc.o
00:02:49.915    SO libspdk_scheduler_gscheduler.so.4.0
00:02:49.915    SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:49.915    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:49.915    CC module/accel/iaa/accel_iaa_rpc.o
00:02:49.915    SYMLINK libspdk_scheduler_gscheduler.so
00:02:49.915    CC module/bdev/delay/vbdev_delay_rpc.o
00:02:49.915    LIB libspdk_sock_posix.a
00:02:49.915    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:49.915    SO libspdk_sock_posix.so.6.0
00:02:49.915    CC module/bdev/gpt/vbdev_gpt.o
00:02:49.915    LIB libspdk_accel_iaa.a
00:02:49.915    SO libspdk_accel_iaa.so.3.0
00:02:49.915    SYMLINK libspdk_sock_posix.so
00:02:49.915    LIB libspdk_bdev_error.a
00:02:50.173    SYMLINK libspdk_accel_iaa.so
00:02:50.173    SO libspdk_bdev_error.so.6.0
00:02:50.173    LIB libspdk_blobfs_bdev.a
00:02:50.173    SO libspdk_blobfs_bdev.so.6.0
00:02:50.173    SYMLINK libspdk_bdev_error.so
00:02:50.173    LIB libspdk_bdev_delay.a
00:02:50.173    SYMLINK libspdk_blobfs_bdev.so
00:02:50.173    CC module/bdev/null/bdev_null.o
00:02:50.173    CC module/bdev/malloc/bdev_malloc.o
00:02:50.173    SO libspdk_bdev_delay.so.6.0
00:02:50.173    CC module/bdev/passthru/vbdev_passthru.o
00:02:50.173    CC module/bdev/nvme/bdev_nvme.o
00:02:50.173    LIB libspdk_bdev_gpt.a
00:02:50.173    CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:50.173    SO libspdk_bdev_gpt.so.6.0
00:02:50.173    SYMLINK libspdk_bdev_delay.so
00:02:50.173    LIB libspdk_bdev_lvol.a
00:02:50.173    CC module/bdev/raid/bdev_raid.o
00:02:50.173    SYMLINK libspdk_bdev_gpt.so
00:02:50.432    SO libspdk_bdev_lvol.so.6.0
00:02:50.432    CC module/bdev/split/vbdev_split.o
00:02:50.432    SYMLINK libspdk_bdev_lvol.so
00:02:50.432    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:50.432    CC module/bdev/split/vbdev_split_rpc.o
00:02:50.432    CC module/bdev/zone_block/vbdev_zone_block.o
00:02:50.432    CC module/bdev/null/bdev_null_rpc.o
00:02:50.432    CC module/bdev/xnvme/bdev_xnvme.o
00:02:50.432    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:50.432    CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:02:50.432    LIB libspdk_bdev_split.a
00:02:50.432    LIB libspdk_bdev_passthru.a
00:02:50.432    LIB libspdk_bdev_malloc.a
00:02:50.691    SO libspdk_bdev_passthru.so.6.0
00:02:50.691    SO libspdk_bdev_split.so.6.0
00:02:50.692    SO libspdk_bdev_malloc.so.6.0
00:02:50.692    LIB libspdk_bdev_null.a
00:02:50.692    SO libspdk_bdev_null.so.6.0
00:02:50.692    SYMLINK libspdk_bdev_split.so
00:02:50.692    SYMLINK libspdk_bdev_malloc.so
00:02:50.692    CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:50.692    CC module/bdev/raid/bdev_raid_rpc.o
00:02:50.692    SYMLINK libspdk_bdev_passthru.so
00:02:50.692    CC module/bdev/raid/bdev_raid_sb.o
00:02:50.692    CC module/bdev/nvme/nvme_rpc.o
00:02:50.692    CC module/bdev/nvme/bdev_mdns_client.o
00:02:50.692    SYMLINK libspdk_bdev_null.so
00:02:50.692    LIB libspdk_bdev_xnvme.a
00:02:50.692    LIB libspdk_bdev_zone_block.a
00:02:50.692    SO libspdk_bdev_xnvme.so.3.0
00:02:50.692    SO libspdk_bdev_zone_block.so.6.0
00:02:50.952    SYMLINK libspdk_bdev_xnvme.so
00:02:50.952    SYMLINK libspdk_bdev_zone_block.so
00:02:50.952    CC module/bdev/raid/raid0.o
00:02:50.952    CC module/bdev/aio/bdev_aio.o
00:02:50.952    CC module/bdev/nvme/vbdev_opal.o
00:02:50.952    CC module/bdev/raid/raid1.o
00:02:50.952    CC module/bdev/ftl/bdev_ftl.o
00:02:50.952    CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:50.952    CC module/bdev/iscsi/bdev_iscsi.o
00:02:51.213    CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:51.213    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:51.213    CC module/bdev/raid/concat.o
00:02:51.213    CC module/bdev/aio/bdev_aio_rpc.o
00:02:51.213    CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:51.213    CC module/bdev/virtio/bdev_virtio_blk.o
00:02:51.213    CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:51.213    LIB libspdk_bdev_ftl.a
00:02:51.213    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:51.213    SO libspdk_bdev_ftl.so.6.0
00:02:51.473    LIB libspdk_bdev_aio.a
00:02:51.473    LIB libspdk_bdev_raid.a
00:02:51.473    SO libspdk_bdev_aio.so.6.0
00:02:51.473    SYMLINK libspdk_bdev_ftl.so
00:02:51.473    SO libspdk_bdev_raid.so.6.0
00:02:51.473    LIB libspdk_bdev_iscsi.a
00:02:51.473    SO libspdk_bdev_iscsi.so.6.0
00:02:51.473    SYMLINK libspdk_bdev_aio.so
00:02:51.473    SYMLINK libspdk_bdev_iscsi.so
00:02:51.473    SYMLINK libspdk_bdev_raid.so
00:02:51.473    LIB libspdk_bdev_virtio.a
00:02:51.473    SO libspdk_bdev_virtio.so.6.0
00:02:51.733    SYMLINK libspdk_bdev_virtio.so
00:02:53.115    LIB libspdk_bdev_nvme.a
00:02:53.115    SO libspdk_bdev_nvme.so.7.1
00:02:53.115    SYMLINK libspdk_bdev_nvme.so
00:02:53.685    CC module/event/subsystems/fsdev/fsdev.o
00:02:53.686    CC module/event/subsystems/iobuf/iobuf.o
00:02:53.686    CC module/event/subsystems/sock/sock.o
00:02:53.686    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:53.686    CC module/event/subsystems/scheduler/scheduler.o
00:02:53.686    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:53.686    CC module/event/subsystems/keyring/keyring.o
00:02:53.686    CC module/event/subsystems/vmd/vmd.o
00:02:53.686    CC module/event/subsystems/vmd/vmd_rpc.o
00:02:53.686    LIB libspdk_event_scheduler.a
00:02:53.686    LIB libspdk_event_keyring.a
00:02:53.686    LIB libspdk_event_fsdev.a
00:02:53.686    LIB libspdk_event_iobuf.a
00:02:53.686    LIB libspdk_event_sock.a
00:02:53.686    LIB libspdk_event_vmd.a
00:02:53.686    LIB libspdk_event_vhost_blk.a
00:02:53.686    SO libspdk_event_scheduler.so.4.0
00:02:53.686    SO libspdk_event_keyring.so.1.0
00:02:53.686    SO libspdk_event_fsdev.so.1.0
00:02:53.686    SO libspdk_event_sock.so.5.0
00:02:53.686    SO libspdk_event_iobuf.so.3.0
00:02:53.686    SO libspdk_event_vmd.so.6.0
00:02:53.686    SO libspdk_event_vhost_blk.so.3.0
00:02:53.686    SYMLINK libspdk_event_scheduler.so
00:02:53.686    SYMLINK libspdk_event_keyring.so
00:02:53.686    SYMLINK libspdk_event_fsdev.so
00:02:53.686    SYMLINK libspdk_event_vhost_blk.so
00:02:53.686    SYMLINK libspdk_event_iobuf.so
00:02:53.686    SYMLINK libspdk_event_sock.so
00:02:53.686    SYMLINK libspdk_event_vmd.so
00:02:53.948    CC module/event/subsystems/accel/accel.o
00:02:54.209    LIB libspdk_event_accel.a
00:02:54.209    SO libspdk_event_accel.so.6.0
00:02:54.209    SYMLINK libspdk_event_accel.so
00:02:54.470    CC module/event/subsystems/bdev/bdev.o
00:02:54.731    LIB libspdk_event_bdev.a
00:02:54.731    SO libspdk_event_bdev.so.6.0
00:02:54.731    SYMLINK libspdk_event_bdev.so
00:02:54.991    CC module/event/subsystems/nbd/nbd.o
00:02:54.991    CC module/event/subsystems/scsi/scsi.o
00:02:54.991    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:54.991    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:54.991    CC module/event/subsystems/ublk/ublk.o
00:02:54.991    LIB libspdk_event_scsi.a
00:02:54.991    LIB libspdk_event_ublk.a
00:02:54.991    SO libspdk_event_scsi.so.6.0
00:02:54.991    SO libspdk_event_ublk.so.3.0
00:02:54.991    LIB libspdk_event_nbd.a
00:02:54.991    SO libspdk_event_nbd.so.6.0
00:02:55.253    SYMLINK libspdk_event_ublk.so
00:02:55.253    SYMLINK libspdk_event_scsi.so
00:02:55.253    SYMLINK libspdk_event_nbd.so
00:02:55.253    LIB libspdk_event_nvmf.a
00:02:55.253    SO libspdk_event_nvmf.so.6.0
00:02:55.253    SYMLINK libspdk_event_nvmf.so
00:02:55.253    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:55.253    CC module/event/subsystems/iscsi/iscsi.o
00:02:55.513    LIB libspdk_event_vhost_scsi.a
00:02:55.513    LIB libspdk_event_iscsi.a
00:02:55.513    SO libspdk_event_vhost_scsi.so.3.0
00:02:55.513    SO libspdk_event_iscsi.so.6.0
00:02:55.513    SYMLINK libspdk_event_vhost_scsi.so
00:02:55.513    SYMLINK libspdk_event_iscsi.so
00:02:55.774    SO libspdk.so.6.0
00:02:55.774    SYMLINK libspdk.so
00:02:56.035    TEST_HEADER include/spdk/accel.h
00:02:56.035    TEST_HEADER include/spdk/accel_module.h
00:02:56.035    CC test/rpc_client/rpc_client_test.o
00:02:56.035    TEST_HEADER include/spdk/assert.h
00:02:56.035    CXX app/trace/trace.o
00:02:56.035    TEST_HEADER include/spdk/barrier.h
00:02:56.035    CC app/trace_record/trace_record.o
00:02:56.035    TEST_HEADER include/spdk/base64.h
00:02:56.035    TEST_HEADER include/spdk/bdev.h
00:02:56.035    TEST_HEADER include/spdk/bdev_module.h
00:02:56.035    TEST_HEADER include/spdk/bdev_zone.h
00:02:56.035    TEST_HEADER include/spdk/bit_array.h
00:02:56.035    TEST_HEADER include/spdk/bit_pool.h
00:02:56.035    TEST_HEADER include/spdk/blob_bdev.h
00:02:56.035    TEST_HEADER include/spdk/blobfs_bdev.h
00:02:56.035    TEST_HEADER include/spdk/blobfs.h
00:02:56.035    TEST_HEADER include/spdk/blob.h
00:02:56.035    TEST_HEADER include/spdk/conf.h
00:02:56.035    TEST_HEADER include/spdk/config.h
00:02:56.035    TEST_HEADER include/spdk/cpuset.h
00:02:56.035    TEST_HEADER include/spdk/crc16.h
00:02:56.035    TEST_HEADER include/spdk/crc32.h
00:02:56.035    TEST_HEADER include/spdk/crc64.h
00:02:56.035    TEST_HEADER include/spdk/dif.h
00:02:56.035    TEST_HEADER include/spdk/dma.h
00:02:56.035    CC app/nvmf_tgt/nvmf_main.o
00:02:56.035    TEST_HEADER include/spdk/endian.h
00:02:56.035    TEST_HEADER include/spdk/env_dpdk.h
00:02:56.035    TEST_HEADER include/spdk/env.h
00:02:56.035    TEST_HEADER include/spdk/event.h
00:02:56.035    TEST_HEADER include/spdk/fd_group.h
00:02:56.035    TEST_HEADER include/spdk/fd.h
00:02:56.035    TEST_HEADER include/spdk/file.h
00:02:56.035    TEST_HEADER include/spdk/fsdev.h
00:02:56.035    TEST_HEADER include/spdk/fsdev_module.h
00:02:56.035    TEST_HEADER include/spdk/ftl.h
00:02:56.035    TEST_HEADER include/spdk/fuse_dispatcher.h
00:02:56.035    TEST_HEADER include/spdk/gpt_spec.h
00:02:56.035    TEST_HEADER include/spdk/hexlify.h
00:02:56.035    TEST_HEADER include/spdk/histogram_data.h
00:02:56.035    TEST_HEADER include/spdk/idxd.h
00:02:56.035    TEST_HEADER include/spdk/idxd_spec.h
00:02:56.035    TEST_HEADER include/spdk/init.h
00:02:56.035    TEST_HEADER include/spdk/ioat.h
00:02:56.035    TEST_HEADER include/spdk/ioat_spec.h
00:02:56.035    TEST_HEADER include/spdk/iscsi_spec.h
00:02:56.035    TEST_HEADER include/spdk/json.h
00:02:56.035    TEST_HEADER include/spdk/jsonrpc.h
00:02:56.035    TEST_HEADER include/spdk/keyring.h
00:02:56.035    TEST_HEADER include/spdk/keyring_module.h
00:02:56.035    CC test/thread/poller_perf/poller_perf.o
00:02:56.035    TEST_HEADER include/spdk/likely.h
00:02:56.035    TEST_HEADER include/spdk/log.h
00:02:56.035    TEST_HEADER include/spdk/lvol.h
00:02:56.035    CC examples/util/zipf/zipf.o
00:02:56.035    TEST_HEADER include/spdk/md5.h
00:02:56.035    TEST_HEADER include/spdk/memory.h
00:02:56.035    TEST_HEADER include/spdk/mmio.h
00:02:56.035    TEST_HEADER include/spdk/nbd.h
00:02:56.035    CC test/dma/test_dma/test_dma.o
00:02:56.035    TEST_HEADER include/spdk/net.h
00:02:56.035    CC test/app/bdev_svc/bdev_svc.o
00:02:56.035    TEST_HEADER include/spdk/notify.h
00:02:56.035    TEST_HEADER include/spdk/nvme.h
00:02:56.035    TEST_HEADER include/spdk/nvme_intel.h
00:02:56.035    TEST_HEADER include/spdk/nvme_ocssd.h
00:02:56.035    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:56.035    TEST_HEADER include/spdk/nvme_spec.h
00:02:56.035    TEST_HEADER include/spdk/nvme_zns.h
00:02:56.035    TEST_HEADER include/spdk/nvmf_cmd.h
00:02:56.035    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:56.035    TEST_HEADER include/spdk/nvmf.h
00:02:56.035    TEST_HEADER include/spdk/nvmf_spec.h
00:02:56.035    TEST_HEADER include/spdk/nvmf_transport.h
00:02:56.035    TEST_HEADER include/spdk/opal.h
00:02:56.035    TEST_HEADER include/spdk/opal_spec.h
00:02:56.035    TEST_HEADER include/spdk/pci_ids.h
00:02:56.035    TEST_HEADER include/spdk/pipe.h
00:02:56.035    TEST_HEADER include/spdk/queue.h
00:02:56.035    TEST_HEADER include/spdk/reduce.h
00:02:56.035    TEST_HEADER include/spdk/rpc.h
00:02:56.035    TEST_HEADER include/spdk/scheduler.h
00:02:56.035    LINK rpc_client_test
00:02:56.035    TEST_HEADER include/spdk/scsi.h
00:02:56.035    TEST_HEADER include/spdk/scsi_spec.h
00:02:56.035    TEST_HEADER include/spdk/sock.h
00:02:56.035    TEST_HEADER include/spdk/stdinc.h
00:02:56.035    TEST_HEADER include/spdk/string.h
00:02:56.036    TEST_HEADER include/spdk/thread.h
00:02:56.036    TEST_HEADER include/spdk/trace.h
00:02:56.036    TEST_HEADER include/spdk/trace_parser.h
00:02:56.036    TEST_HEADER include/spdk/tree.h
00:02:56.036    TEST_HEADER include/spdk/ublk.h
00:02:56.036    TEST_HEADER include/spdk/util.h
00:02:56.036    TEST_HEADER include/spdk/uuid.h
00:02:56.036    TEST_HEADER include/spdk/version.h
00:02:56.036    TEST_HEADER include/spdk/vfio_user_pci.h
00:02:56.036    CC test/env/mem_callbacks/mem_callbacks.o
00:02:56.036    TEST_HEADER include/spdk/vfio_user_spec.h
00:02:56.036    TEST_HEADER include/spdk/vhost.h
00:02:56.036    TEST_HEADER include/spdk/vmd.h
00:02:56.036    TEST_HEADER include/spdk/xor.h
00:02:56.295    TEST_HEADER include/spdk/zipf.h
00:02:56.295    CXX test/cpp_headers/accel.o
00:02:56.295    LINK nvmf_tgt
00:02:56.295    LINK poller_perf
00:02:56.295    LINK spdk_trace_record
00:02:56.295    LINK zipf
00:02:56.295    LINK bdev_svc
00:02:56.295    LINK spdk_trace
00:02:56.295    CXX test/cpp_headers/accel_module.o
00:02:56.295    CXX test/cpp_headers/assert.o
00:02:56.555    CC examples/ioat/perf/perf.o
00:02:56.555    CC examples/interrupt_tgt/interrupt_tgt.o
00:02:56.555    CC app/iscsi_tgt/iscsi_tgt.o
00:02:56.555    CXX test/cpp_headers/barrier.o
00:02:56.555    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:56.555    LINK test_dma
00:02:56.555    CC app/spdk_tgt/spdk_tgt.o
00:02:56.555    LINK interrupt_tgt
00:02:56.555    CXX test/cpp_headers/base64.o
00:02:56.555    LINK ioat_perf
00:02:56.815    LINK iscsi_tgt
00:02:56.815    CC examples/sock/hello_world/hello_sock.o
00:02:56.815    LINK mem_callbacks
00:02:56.815    CC examples/thread/thread/thread_ex.o
00:02:56.815    LINK spdk_tgt
00:02:56.815    CXX test/cpp_headers/bdev.o
00:02:56.815    CC app/spdk_lspci/spdk_lspci.o
00:02:56.815    CC examples/ioat/verify/verify.o
00:02:56.815    CC test/env/vtophys/vtophys.o
00:02:56.815    CC app/spdk_nvme_perf/perf.o
00:02:56.815    LINK hello_sock
00:02:57.075    LINK thread
00:02:57.075    LINK spdk_lspci
00:02:57.075    CXX test/cpp_headers/bdev_module.o
00:02:57.075    LINK nvme_fuzz
00:02:57.075    CC test/event/event_perf/event_perf.o
00:02:57.075    LINK vtophys
00:02:57.075    LINK verify
00:02:57.075    CC examples/vmd/lsvmd/lsvmd.o
00:02:57.075    LINK event_perf
00:02:57.075    CXX test/cpp_headers/bdev_zone.o
00:02:57.334    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:57.334    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:57.334    CC test/nvme/aer/aer.o
00:02:57.334    LINK lsvmd
00:02:57.334    CC examples/idxd/perf/perf.o
00:02:57.334    CC test/accel/dif/dif.o
00:02:57.334    CXX test/cpp_headers/bit_array.o
00:02:57.334    CC test/event/reactor/reactor.o
00:02:57.334    LINK env_dpdk_post_init
00:02:57.334    CC test/blobfs/mkfs/mkfs.o
00:02:57.594    LINK reactor
00:02:57.594    CXX test/cpp_headers/bit_pool.o
00:02:57.594    CC examples/vmd/led/led.o
00:02:57.594    LINK aer
00:02:57.594    CC test/env/memory/memory_ut.o
00:02:57.594    LINK mkfs
00:02:57.594    CXX test/cpp_headers/blob_bdev.o
00:02:57.594    LINK led
00:02:57.594    LINK idxd_perf
00:02:57.594    CC test/event/reactor_perf/reactor_perf.o
00:02:57.854    CC test/nvme/reset/reset.o
00:02:57.854    LINK spdk_nvme_perf
00:02:57.854    CXX test/cpp_headers/blobfs_bdev.o
00:02:57.854    LINK reactor_perf
00:02:57.854    CC test/event/app_repeat/app_repeat.o
00:02:57.854    CC examples/accel/perf/accel_perf.o
00:02:57.854    CC examples/fsdev/hello_world/hello_fsdev.o
00:02:57.854    CXX test/cpp_headers/blobfs.o
00:02:58.182    LINK app_repeat
00:02:58.182    CC app/spdk_nvme_identify/identify.o
00:02:58.182    LINK reset
00:02:58.182    LINK dif
00:02:58.182    CXX test/cpp_headers/blob.o
00:02:58.182    CC test/lvol/esnap/esnap.o
00:02:58.182    LINK hello_fsdev
00:02:58.182    CC test/nvme/sgl/sgl.o
00:02:58.182    CXX test/cpp_headers/conf.o
00:02:58.182    CC test/event/scheduler/scheduler.o
00:02:58.182    CC test/nvme/e2edp/nvme_dp.o
00:02:58.455    CXX test/cpp_headers/config.o
00:02:58.455    CXX test/cpp_headers/cpuset.o
00:02:58.455    LINK accel_perf
00:02:58.455    LINK scheduler
00:02:58.455    LINK sgl
00:02:58.455    LINK nvme_dp
00:02:58.455    CXX test/cpp_headers/crc16.o
00:02:58.455    CC examples/blob/hello_world/hello_blob.o
00:02:58.716    CXX test/cpp_headers/crc32.o
00:02:58.716    LINK memory_ut
00:02:58.716    CXX test/cpp_headers/crc64.o
00:02:58.716    CC test/nvme/overhead/overhead.o
00:02:58.716    CC examples/nvme/hello_world/hello_world.o
00:02:58.716    LINK hello_blob
00:02:58.716    CC examples/blob/cli/blobcli.o
00:02:58.716    LINK spdk_nvme_identify
00:02:58.716    CC examples/bdev/hello_world/hello_bdev.o
00:02:58.988    CC test/env/pci/pci_ut.o
00:02:58.988    CXX test/cpp_headers/dif.o
00:02:58.988    CXX test/cpp_headers/dma.o
00:02:58.988    LINK hello_world
00:02:58.988    LINK iscsi_fuzz
00:02:58.988    LINK overhead
00:02:58.988    CC app/spdk_nvme_discover/discovery_aer.o
00:02:58.988    LINK hello_bdev
00:02:58.988    CXX test/cpp_headers/endian.o
00:02:59.318    CC app/spdk_top/spdk_top.o
00:02:59.318    CC examples/nvme/reconnect/reconnect.o
00:02:59.318    CXX test/cpp_headers/env_dpdk.o
00:02:59.318    LINK spdk_nvme_discover
00:02:59.318    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:59.318    CC test/nvme/err_injection/err_injection.o
00:02:59.318    LINK pci_ut
00:02:59.318    LINK blobcli
00:02:59.318    CC examples/bdev/bdevperf/bdevperf.o
00:02:59.319    CXX test/cpp_headers/env.o
00:02:59.319    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:59.319    CXX test/cpp_headers/event.o
00:02:59.580    LINK err_injection
00:02:59.580    CXX test/cpp_headers/fd_group.o
00:02:59.580    LINK reconnect
00:02:59.580    CC test/nvme/reserve/reserve.o
00:02:59.580    CC test/nvme/startup/startup.o
00:02:59.580    CXX test/cpp_headers/fd.o
00:02:59.580    CC test/nvme/simple_copy/simple_copy.o
00:02:59.580    CC test/nvme/connect_stress/connect_stress.o
00:02:59.841    LINK startup
00:02:59.841    CXX test/cpp_headers/file.o
00:02:59.841    LINK reserve
00:02:59.841    CC examples/nvme/nvme_manage/nvme_manage.o
00:02:59.841    LINK vhost_fuzz
00:02:59.841    LINK connect_stress
00:02:59.841    LINK simple_copy
00:02:59.841    CXX test/cpp_headers/fsdev.o
00:03:00.101    CXX test/cpp_headers/fsdev_module.o
00:03:00.101    CC test/nvme/boot_partition/boot_partition.o
00:03:00.101    CC test/app/jsoncat/jsoncat.o
00:03:00.101    CC test/app/histogram_perf/histogram_perf.o
00:03:00.101    LINK spdk_top
00:03:00.101    CC test/nvme/compliance/nvme_compliance.o
00:03:00.101    CXX test/cpp_headers/ftl.o
00:03:00.101    CC test/nvme/fused_ordering/fused_ordering.o
00:03:00.101    LINK jsoncat
00:03:00.101    LINK boot_partition
00:03:00.101    LINK histogram_perf
00:03:00.101    LINK bdevperf
00:03:00.360    LINK nvme_manage
00:03:00.360    CC app/vhost/vhost.o
00:03:00.360    LINK fused_ordering
00:03:00.360    CXX test/cpp_headers/fuse_dispatcher.o
00:03:00.360    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:00.360    CC test/app/stub/stub.o
00:03:00.360    CC test/nvme/fdp/fdp.o
00:03:00.360    LINK nvme_compliance
00:03:00.360    CC test/nvme/cuse/cuse.o
00:03:00.622    CXX test/cpp_headers/gpt_spec.o
00:03:00.622    CXX test/cpp_headers/hexlify.o
00:03:00.622    LINK vhost
00:03:00.622    LINK doorbell_aers
00:03:00.622    CC examples/nvme/arbitration/arbitration.o
00:03:00.622    LINK stub
00:03:00.622    CXX test/cpp_headers/histogram_data.o
00:03:00.622    CXX test/cpp_headers/idxd.o
00:03:00.622    CXX test/cpp_headers/idxd_spec.o
00:03:00.622    CXX test/cpp_headers/init.o
00:03:00.622    CXX test/cpp_headers/ioat.o
00:03:00.882    LINK fdp
00:03:00.882    CC app/spdk_dd/spdk_dd.o
00:03:00.882    CC app/fio/nvme/fio_plugin.o
00:03:00.882    CXX test/cpp_headers/ioat_spec.o
00:03:00.882    CXX test/cpp_headers/iscsi_spec.o
00:03:00.882    LINK arbitration
00:03:00.882    CC examples/nvme/hotplug/hotplug.o
00:03:00.882    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:00.882    CC examples/nvme/abort/abort.o
00:03:01.143    CXX test/cpp_headers/json.o
00:03:01.143    CC app/fio/bdev/fio_plugin.o
00:03:01.143    LINK cmb_copy
00:03:01.143    LINK spdk_dd
00:03:01.143    LINK hotplug
00:03:01.143    CXX test/cpp_headers/jsonrpc.o
00:03:01.143    CC test/bdev/bdevio/bdevio.o
00:03:01.404    CXX test/cpp_headers/keyring.o
00:03:01.404    CXX test/cpp_headers/keyring_module.o
00:03:01.404    LINK abort
00:03:01.404    CXX test/cpp_headers/likely.o
00:03:01.404    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:01.404    LINK spdk_nvme
00:03:01.404    CXX test/cpp_headers/log.o
00:03:01.404    CXX test/cpp_headers/lvol.o
00:03:01.404    CXX test/cpp_headers/md5.o
00:03:01.404    CXX test/cpp_headers/memory.o
00:03:01.404    CXX test/cpp_headers/mmio.o
00:03:01.404    LINK pmr_persistence
00:03:01.663    CXX test/cpp_headers/nbd.o
00:03:01.663    LINK spdk_bdev
00:03:01.663    CXX test/cpp_headers/net.o
00:03:01.663    CXX test/cpp_headers/notify.o
00:03:01.663    LINK bdevio
00:03:01.663    CXX test/cpp_headers/nvme.o
00:03:01.663    CXX test/cpp_headers/nvme_intel.o
00:03:01.663    CXX test/cpp_headers/nvme_ocssd.o
00:03:01.663    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:01.663    LINK cuse
00:03:01.663    CXX test/cpp_headers/nvme_spec.o
00:03:01.663    CXX test/cpp_headers/nvme_zns.o
00:03:01.663    CXX test/cpp_headers/nvmf_cmd.o
00:03:01.663    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:01.663    CXX test/cpp_headers/nvmf.o
00:03:01.921    CXX test/cpp_headers/nvmf_spec.o
00:03:01.921    CXX test/cpp_headers/nvmf_transport.o
00:03:01.921    CC examples/nvmf/nvmf/nvmf.o
00:03:01.921    CXX test/cpp_headers/opal.o
00:03:01.921    CXX test/cpp_headers/opal_spec.o
00:03:01.921    CXX test/cpp_headers/pci_ids.o
00:03:01.921    CXX test/cpp_headers/pipe.o
00:03:01.921    CXX test/cpp_headers/queue.o
00:03:01.921    CXX test/cpp_headers/reduce.o
00:03:01.921    CXX test/cpp_headers/rpc.o
00:03:01.921    CXX test/cpp_headers/scheduler.o
00:03:01.921    CXX test/cpp_headers/scsi.o
00:03:01.921    CXX test/cpp_headers/scsi_spec.o
00:03:02.182    CXX test/cpp_headers/sock.o
00:03:02.182    CXX test/cpp_headers/stdinc.o
00:03:02.182    CXX test/cpp_headers/string.o
00:03:02.182    CXX test/cpp_headers/thread.o
00:03:02.182    CXX test/cpp_headers/trace.o
00:03:02.182    CXX test/cpp_headers/trace_parser.o
00:03:02.182    LINK nvmf
00:03:02.182    CXX test/cpp_headers/tree.o
00:03:02.182    CXX test/cpp_headers/ublk.o
00:03:02.182    CXX test/cpp_headers/util.o
00:03:02.182    CXX test/cpp_headers/uuid.o
00:03:02.182    CXX test/cpp_headers/version.o
00:03:02.182    CXX test/cpp_headers/vfio_user_pci.o
00:03:02.182    CXX test/cpp_headers/vfio_user_spec.o
00:03:02.182    CXX test/cpp_headers/vhost.o
00:03:02.182    CXX test/cpp_headers/vmd.o
00:03:02.182    CXX test/cpp_headers/xor.o
00:03:02.445    CXX test/cpp_headers/zipf.o
00:03:03.830    LINK esnap
00:03:04.401  
00:03:04.401  real	1m14.172s
00:03:04.401  user	6m40.858s
00:03:04.401  sys	1m15.394s
00:03:04.401  ************************************
00:03:04.401  END TEST make
00:03:04.401  ************************************
00:03:04.401   16:52:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:04.401   16:52:27 make -- common/autotest_common.sh@10 -- $ set +x
00:03:04.401   16:52:27  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:04.401   16:52:27  -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:04.401   16:52:27  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:04.401   16:52:27  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.401   16:52:27  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:03:04.401   16:52:27  -- pm/common@44 -- $ pid=5071
00:03:04.401   16:52:27  -- pm/common@50 -- $ kill -TERM 5071
00:03:04.401   16:52:27  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.401   16:52:27  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:04.401   16:52:27  -- pm/common@44 -- $ pid=5072
00:03:04.401   16:52:27  -- pm/common@50 -- $ kill -TERM 5072
00:03:04.401   16:52:27  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:04.401   16:52:27  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:04.401    16:52:27  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:04.401     16:52:27  -- common/autotest_common.sh@1711 -- # lcov --version
00:03:04.401     16:52:27  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:04.401    16:52:27  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:04.401    16:52:27  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:04.401    16:52:27  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:04.401    16:52:27  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:04.401    16:52:27  -- scripts/common.sh@336 -- # IFS=.-:
00:03:04.401    16:52:27  -- scripts/common.sh@336 -- # read -ra ver1
00:03:04.401    16:52:27  -- scripts/common.sh@337 -- # IFS=.-:
00:03:04.401    16:52:27  -- scripts/common.sh@337 -- # read -ra ver2
00:03:04.401    16:52:27  -- scripts/common.sh@338 -- # local 'op=<'
00:03:04.401    16:52:27  -- scripts/common.sh@340 -- # ver1_l=2
00:03:04.401    16:52:27  -- scripts/common.sh@341 -- # ver2_l=1
00:03:04.401    16:52:27  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:04.401    16:52:27  -- scripts/common.sh@344 -- # case "$op" in
00:03:04.401    16:52:27  -- scripts/common.sh@345 -- # : 1
00:03:04.401    16:52:27  -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:04.401    16:52:27  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.401     16:52:27  -- scripts/common.sh@365 -- # decimal 1
00:03:04.401     16:52:27  -- scripts/common.sh@353 -- # local d=1
00:03:04.401     16:52:27  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:04.401     16:52:27  -- scripts/common.sh@355 -- # echo 1
00:03:04.401    16:52:27  -- scripts/common.sh@365 -- # ver1[v]=1
00:03:04.401     16:52:27  -- scripts/common.sh@366 -- # decimal 2
00:03:04.401     16:52:27  -- scripts/common.sh@353 -- # local d=2
00:03:04.401     16:52:27  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:04.401     16:52:27  -- scripts/common.sh@355 -- # echo 2
00:03:04.401    16:52:27  -- scripts/common.sh@366 -- # ver2[v]=2
00:03:04.401    16:52:27  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:04.401    16:52:27  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:04.401    16:52:27  -- scripts/common.sh@368 -- # return 0
00:03:04.401    16:52:27  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:04.401    16:52:27  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:04.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:04.401  		--rc genhtml_branch_coverage=1
00:03:04.401  		--rc genhtml_function_coverage=1
00:03:04.401  		--rc genhtml_legend=1
00:03:04.401  		--rc geninfo_all_blocks=1
00:03:04.401  		--rc geninfo_unexecuted_blocks=1
00:03:04.401  		
00:03:04.401  		'
00:03:04.401    16:52:27  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:04.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:04.401  		--rc genhtml_branch_coverage=1
00:03:04.401  		--rc genhtml_function_coverage=1
00:03:04.401  		--rc genhtml_legend=1
00:03:04.401  		--rc geninfo_all_blocks=1
00:03:04.401  		--rc geninfo_unexecuted_blocks=1
00:03:04.401  		
00:03:04.401  		'
00:03:04.401    16:52:27  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:04.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:04.401  		--rc genhtml_branch_coverage=1
00:03:04.401  		--rc genhtml_function_coverage=1
00:03:04.401  		--rc genhtml_legend=1
00:03:04.401  		--rc geninfo_all_blocks=1
00:03:04.401  		--rc geninfo_unexecuted_blocks=1
00:03:04.401  		
00:03:04.401  		'
00:03:04.401    16:52:27  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:04.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:04.401  		--rc genhtml_branch_coverage=1
00:03:04.401  		--rc genhtml_function_coverage=1
00:03:04.401  		--rc genhtml_legend=1
00:03:04.401  		--rc geninfo_all_blocks=1
00:03:04.401  		--rc geninfo_unexecuted_blocks=1
00:03:04.401  		
00:03:04.401  		'
00:03:04.401   16:52:27  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:04.401     16:52:27  -- nvmf/common.sh@7 -- # uname -s
00:03:04.401    16:52:27  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:04.401    16:52:27  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:04.401    16:52:27  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:04.401    16:52:27  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:04.401    16:52:27  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:04.401    16:52:27  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:04.401    16:52:27  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:04.401    16:52:27  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:04.401    16:52:27  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:04.401     16:52:27  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:04.401    16:52:27  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c7da8d14-0c7f-44c7-8845-095521e4a89c
00:03:04.401    16:52:27  -- nvmf/common.sh@18 -- # NVME_HOSTID=c7da8d14-0c7f-44c7-8845-095521e4a89c
00:03:04.401    16:52:27  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:04.401    16:52:27  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:04.401    16:52:27  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:04.401    16:52:27  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:04.401    16:52:27  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:04.401     16:52:27  -- scripts/common.sh@15 -- # shopt -s extglob
00:03:04.401     16:52:27  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:04.401     16:52:27  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:04.401     16:52:27  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:04.401      16:52:27  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.401      16:52:27  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.401      16:52:27  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.401      16:52:27  -- paths/export.sh@5 -- # export PATH
00:03:04.401      16:52:27  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:04.401    16:52:27  -- nvmf/common.sh@51 -- # : 0
00:03:04.401    16:52:27  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:04.401    16:52:27  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:04.401    16:52:27  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:04.401    16:52:27  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:04.401    16:52:27  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:04.401    16:52:27  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:04.401  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:04.401    16:52:27  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:04.401    16:52:27  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:04.401    16:52:27  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:04.401   16:52:27  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:04.401    16:52:27  -- spdk/autotest.sh@32 -- # uname -s
00:03:04.659   16:52:27  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:04.659   16:52:27  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:04.659   16:52:27  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:03:04.659   16:52:27  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:03:04.659   16:52:27  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:03:04.659   16:52:27  -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:04.659    16:52:27  -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:04.659   16:52:27  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:04.659   16:52:27  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:04.659   16:52:27  -- spdk/autotest.sh@48 -- # udevadm_pid=55489
00:03:04.659   16:52:27  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:04.659   16:52:27  -- pm/common@17 -- # local monitor
00:03:04.659   16:52:27  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.659   16:52:27  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:04.659   16:52:27  -- pm/common@25 -- # sleep 1
00:03:04.659    16:52:27  -- pm/common@21 -- # date +%s
00:03:04.659    16:52:27  -- pm/common@21 -- # date +%s
00:03:04.659   16:52:27  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733763147
00:03:04.659   16:52:27  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733763147
00:03:04.659  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733763147_collect-vmstat.pm.log
00:03:04.659  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733763147_collect-cpu-load.pm.log
00:03:05.592   16:52:28  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:05.592   16:52:28  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:05.592   16:52:28  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:05.592   16:52:28  -- common/autotest_common.sh@10 -- # set +x
00:03:05.592   16:52:28  -- spdk/autotest.sh@59 -- # create_test_list
00:03:05.592   16:52:28  -- common/autotest_common.sh@752 -- # xtrace_disable
00:03:05.592   16:52:28  -- common/autotest_common.sh@10 -- # set +x
00:03:05.592     16:52:28  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:03:05.592    16:52:28  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:03:05.592   16:52:28  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:03:05.592   16:52:28  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:03:05.592   16:52:28  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:03:05.592   16:52:28  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:05.592    16:52:28  -- common/autotest_common.sh@1457 -- # uname
00:03:05.592   16:52:28  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:03:05.592   16:52:28  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:05.592    16:52:28  -- common/autotest_common.sh@1477 -- # uname
00:03:05.592   16:52:28  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:03:05.592   16:52:28  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:05.592   16:52:28  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:05.592  lcov: LCOV version 1.15
00:03:05.592   16:52:28  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:03:20.473  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:20.473  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:03:35.385   16:52:56  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:35.385   16:52:56  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:35.385   16:52:56  -- common/autotest_common.sh@10 -- # set +x
00:03:35.385   16:52:56  -- spdk/autotest.sh@78 -- # rm -f
00:03:35.385   16:52:56  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:35.385  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:35.385  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:03:35.385  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:03:35.385  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:03:35.385  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:03:35.385   16:52:57  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:35.385   16:52:57  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:35.385   16:52:57  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:35.385   16:52:57  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:35.385   16:52:57  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:35.385   16:52:57  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:35.385   16:52:57  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:03:35.385   16:52:57  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:03:35.385   16:52:57  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0
00:03:35.385   16:52:57  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2
00:03:35.385   16:52:57  -- common/autotest_common.sh@1650 -- # local device=nvme2n2
00:03:35.385   16:52:57  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3
00:03:35.385   16:52:57  -- common/autotest_common.sh@1650 -- # local device=nvme2n3
00:03:35.385   16:52:57  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0
00:03:35.385   16:52:57  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:35.385   16:52:57  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:03:35.385   16:52:57  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:03:35.385   16:52:57  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:35.385   16:52:57  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:35.385   16:52:57  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:35.385   16:52:57  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:35.385   16:52:57  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:35.385   16:52:57  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:35.385   16:52:57  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:35.385  No valid GPT data, bailing
00:03:35.385    16:52:57  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:35.385   16:52:57  -- scripts/common.sh@394 -- # pt=
00:03:35.385   16:52:57  -- scripts/common.sh@395 -- # return 1
00:03:35.385   16:52:57  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:35.385  1+0 records in
00:03:35.385  1+0 records out
00:03:35.385  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106606 s, 98.4 MB/s
00:03:35.385   16:52:57  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:35.385   16:52:57  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:35.386   16:52:57  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:03:35.386   16:52:57  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:03:35.386   16:52:57  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:03:35.386  No valid GPT data, bailing
00:03:35.386    16:52:57  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:35.386   16:52:57  -- scripts/common.sh@394 -- # pt=
00:03:35.386   16:52:57  -- scripts/common.sh@395 -- # return 1
00:03:35.386   16:52:57  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:03:35.386  1+0 records in
00:03:35.386  1+0 records out
00:03:35.386  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467917 s, 224 MB/s
00:03:35.386   16:52:57  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:35.386   16:52:57  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:35.386   16:52:57  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:03:35.386   16:52:57  -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:03:35.386   16:52:57  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:03:35.386  No valid GPT data, bailing
00:03:35.386    16:52:57  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:03:35.386   16:52:58  -- scripts/common.sh@394 -- # pt=
00:03:35.386   16:52:58  -- scripts/common.sh@395 -- # return 1
00:03:35.386   16:52:58  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:03:35.386  1+0 records in
00:03:35.386  1+0 records out
00:03:35.386  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403557 s, 260 MB/s
00:03:35.386   16:52:58  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:35.386   16:52:58  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:35.386   16:52:58  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2
00:03:35.386   16:52:58  -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt
00:03:35.386   16:52:58  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2
00:03:35.386  No valid GPT data, bailing
00:03:35.386    16:52:58  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2
00:03:35.386   16:52:58  -- scripts/common.sh@394 -- # pt=
00:03:35.386   16:52:58  -- scripts/common.sh@395 -- # return 1
00:03:35.386   16:52:58  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1
00:03:35.386  1+0 records in
00:03:35.386  1+0 records out
00:03:35.386  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482069 s, 218 MB/s
00:03:35.386   16:52:58  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:35.386   16:52:58  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:35.386   16:52:58  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3
00:03:35.386   16:52:58  -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt
00:03:35.386   16:52:58  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3
00:03:35.386  No valid GPT data, bailing
00:03:35.386    16:52:58  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3
00:03:35.386   16:52:58  -- scripts/common.sh@394 -- # pt=
00:03:35.386   16:52:58  -- scripts/common.sh@395 -- # return 1
00:03:35.386   16:52:58  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1
00:03:35.386  1+0 records in
00:03:35.386  1+0 records out
00:03:35.386  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423735 s, 247 MB/s
00:03:35.386   16:52:58  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:35.386   16:52:58  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:35.386   16:52:58  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:03:35.386   16:52:58  -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:03:35.386   16:52:58  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:03:35.386  No valid GPT data, bailing
00:03:35.386    16:52:58  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:03:35.386   16:52:58  -- scripts/common.sh@394 -- # pt=
00:03:35.386   16:52:58  -- scripts/common.sh@395 -- # return 1
00:03:35.386   16:52:58  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:03:35.386  1+0 records in
00:03:35.386  1+0 records out
00:03:35.386  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445209 s, 236 MB/s
00:03:35.386   16:52:58  -- spdk/autotest.sh@105 -- # sync
00:03:35.643   16:52:58  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:35.643   16:52:58  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:35.643    16:52:58  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:37.542    16:53:00  -- spdk/autotest.sh@111 -- # uname -s
00:03:37.542   16:53:00  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:37.542   16:53:00  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:37.542   16:53:00  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:37.542  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:38.107  Hugepages
00:03:38.107  node     hugesize     free /  total
00:03:38.107  node0   1048576kB        0 /      0
00:03:38.107  node0      2048kB        0 /      0
00:03:38.107  
00:03:38.107  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:38.107  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:03:38.107  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:03:38.107  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:03:38.107  NVMe                      0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:03:38.364  NVMe                      0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:03:38.364    16:53:01  -- spdk/autotest.sh@117 -- # uname -s
00:03:38.364   16:53:01  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:38.364   16:53:01  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:38.364   16:53:01  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:38.622  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:39.188  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:03:39.188  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:03:39.188  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:39.188  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:03:39.445   16:53:02  -- common/autotest_common.sh@1517 -- # sleep 1
00:03:40.379   16:53:03  -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:40.379   16:53:03  -- common/autotest_common.sh@1518 -- # local bdfs
00:03:40.379   16:53:03  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:40.379    16:53:03  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:40.379    16:53:03  -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:40.379    16:53:03  -- common/autotest_common.sh@1498 -- # local bdfs
00:03:40.379    16:53:03  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:40.379     16:53:03  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:40.379     16:53:03  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:40.379    16:53:03  -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:03:40.379    16:53:03  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:03:40.379   16:53:03  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:40.636  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:40.894  Waiting for block devices as requested
00:03:40.894  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:03:40.894  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:03:40.894  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:03:41.152  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:03:46.414  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:03:46.414   16:53:09  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:46.414    16:53:09  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:03:46.414    16:53:09  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:03:46.414    16:53:09  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:03:46.414     16:53:09  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:03:46.414    16:53:09  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:03:46.414   16:53:09  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:03:46.414   16:53:09  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # grep oacs
00:03:46.414   16:53:09  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:03:46.414   16:53:09  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:46.414   16:53:09  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:03:46.414   16:53:09  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:46.414   16:53:09  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:03:46.414   16:53:09  -- common/autotest_common.sh@1543 -- # continue
00:03:46.414   16:53:09  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:46.414    16:53:09  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:03:46.414    16:53:09  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:03:46.414    16:53:09  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:03:46.414     16:53:09  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:03:46.414    16:53:09  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:46.414   16:53:09  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:46.414   16:53:09  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # grep oacs
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:46.414   16:53:09  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:03:46.414   16:53:09  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:46.414   16:53:09  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:46.414   16:53:09  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:46.414   16:53:09  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:03:46.414   16:53:09  -- common/autotest_common.sh@1543 -- # continue
00:03:46.414   16:53:09  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:46.414    16:53:09  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:03:46.414    16:53:09  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]]
00:03:46.414     16:53:09  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2
00:03:46.414   16:53:09  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2
00:03:46.414   16:53:09  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]]
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # grep oacs
00:03:46.414    16:53:09  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:46.414   16:53:09  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:03:46.414   16:53:09  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:46.414   16:53:09  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2
00:03:46.414    16:53:09  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:46.414   16:53:09  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:46.414   16:53:09  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:03:46.414   16:53:09  -- common/autotest_common.sh@1543 -- # continue
00:03:46.414   16:53:09  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:46.414    16:53:09  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:03:46.414     16:53:09  -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme
00:03:46.414    16:53:09  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:03:46.414    16:53:09  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]]
00:03:46.415     16:53:09  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:03:46.415    16:53:09  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3
00:03:46.415   16:53:09  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3
00:03:46.415   16:53:09  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]]
00:03:46.415    16:53:09  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:46.415    16:53:09  -- common/autotest_common.sh@1531 -- # grep oacs
00:03:46.415    16:53:09  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3
00:03:46.415   16:53:09  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:03:46.415   16:53:09  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:46.415   16:53:09  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:46.415    16:53:09  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3
00:03:46.415    16:53:09  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:46.415    16:53:09  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:46.415   16:53:09  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:46.415   16:53:09  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:03:46.415   16:53:09  -- common/autotest_common.sh@1543 -- # continue
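The loop above is autotest's pre_cleanup pass: for each PCI BDF it resolves the /dev/nvmeX character device through sysfs, reads OACS from "nvme id-ctrl", and only proceeds with namespace cleanup when the namespace-management bit (0x8) is set and the unallocated capacity (unvmcap) is non-zero; with unvmcap at 0 on every controller here, each iteration hits "continue". A minimal bash sketch of the same gate (the function name is illustrative, not SPDK's exact helper):

    # Resolve /dev/nvmeX from a PCI BDF, then decide whether cleanup is needed.
    check_nvme_cleanup() {
        local bdf=$1 path name oacs unvmcap
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return
        name=$(basename "$path")                                  # e.g. nvme0
        oacs=$(nvme id-ctrl "/dev/$name" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) || return        # no namespace-management support
        unvmcap=$(nvme id-ctrl "/dev/$name" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && return      # nothing unallocated: skip cleanup
        echo "would clean namespaces on /dev/$name"
    }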
00:03:46.415   16:53:09  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:46.415   16:53:09  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:46.415   16:53:09  -- common/autotest_common.sh@10 -- # set +x
00:03:46.415   16:53:09  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:46.415   16:53:09  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:46.415   16:53:09  -- common/autotest_common.sh@10 -- # set +x
00:03:46.415   16:53:09  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:46.672  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:47.240  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:47.240  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:03:47.240  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:03:47.240  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
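setup.sh detaches all four QEMU NVMe controllers (1b36:0010) from the kernel nvme driver and hands them to uio_pci_generic; vfio-pci is not used because, as the EAL probe output below also notes, the vfio module is absent. The boot disk at 0000:00:03.0 is skipped since its partitions are mounted. The underlying mechanism is the standard sysfs rebind sequence; a simplified sketch (setup.sh itself additionally handles hugepages, permissions, and other device classes):

    # Rebind one PCI device to uio_pci_generic via sysfs (run as root).
    bdf=0000:00:10.0
    modprobe uio_pci_generic
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach from nvme
    fi
    echo "$bdf" > /sys/bus/pci/drivers_probe     # re-probe picks up the override
    echo > "/sys/bus/pci/devices/$bdf/driver_override"            # clear it again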
00:03:47.240   16:53:10  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:47.240   16:53:10  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:47.240   16:53:10  -- common/autotest_common.sh@10 -- # set +x
00:03:47.240   16:53:10  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:47.240   16:53:10  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:47.240    16:53:10  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:47.240    16:53:10  -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:47.240    16:53:10  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:47.240    16:53:10  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:47.240    16:53:10  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:47.240     16:53:10  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:47.240     16:53:10  -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:47.240     16:53:10  -- common/autotest_common.sh@1498 -- # local bdfs
00:03:47.240     16:53:10  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:47.240      16:53:10  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:47.240      16:53:10  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:47.498     16:53:10  -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:03:47.498     16:53:10  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:03:47.498    16:53:10  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:47.498     16:53:10  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:03:47.498    16:53:10  -- common/autotest_common.sh@1566 -- # device=0x0010
00:03:47.498    16:53:10  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:03:47.499    16:53:10  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:47.499     16:53:10  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:03:47.499    16:53:10  -- common/autotest_common.sh@1566 -- # device=0x0010
00:03:47.499    16:53:10  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:03:47.499    16:53:10  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:47.499     16:53:10  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device
00:03:47.499    16:53:10  -- common/autotest_common.sh@1566 -- # device=0x0010
00:03:47.499    16:53:10  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:03:47.499    16:53:10  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:47.499     16:53:10  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device
00:03:47.499    16:53:10  -- common/autotest_common.sh@1566 -- # device=0x0010
00:03:47.499    16:53:10  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:03:47.499    16:53:10  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:03:47.499    16:53:10  -- common/autotest_common.sh@1572 -- # return 0
00:03:47.499   16:53:10  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:03:47.499   16:53:10  -- common/autotest_common.sh@1580 -- # return 0
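opal_revert_cleanup only matters for Opal-capable drives, so it first enumerates the NVMe BDFs via gen_nvme.sh and keeps those whose PCI device ID is 0x0a54; the emulated controllers all report 0x0010, the list stays empty, and the function returns early. The same scan by hand, reusing the jq path from the trace:

    # Keep only NVMe controllers whose PCI device ID matches 0x0a54.
    mapfile -t _bdfs < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010 here
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} )) || echo "no matching drives; nothing to revert"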
00:03:47.499   16:53:10  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:47.499   16:53:10  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:47.499   16:53:10  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:47.499   16:53:10  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:47.499   16:53:10  -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:47.499   16:53:10  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:47.499   16:53:10  -- common/autotest_common.sh@10 -- # set +x
00:03:47.499   16:53:10  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:47.499   16:53:10  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:03:47.499   16:53:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:47.499   16:53:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:47.499   16:53:10  -- common/autotest_common.sh@10 -- # set +x
00:03:47.499  ************************************
00:03:47.499  START TEST env
00:03:47.499  ************************************
00:03:47.499   16:53:10 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:03:47.499  * Looking for test storage...
00:03:47.499  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:47.499     16:53:10 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:47.499     16:53:10 env -- common/autotest_common.sh@1711 -- # lcov --version
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:47.499    16:53:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:47.499    16:53:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:47.499    16:53:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:47.499    16:53:10 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:47.499    16:53:10 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:47.499    16:53:10 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:47.499    16:53:10 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:47.499    16:53:10 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:47.499    16:53:10 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:47.499    16:53:10 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:47.499    16:53:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:47.499    16:53:10 env -- scripts/common.sh@344 -- # case "$op" in
00:03:47.499    16:53:10 env -- scripts/common.sh@345 -- # : 1
00:03:47.499    16:53:10 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:47.499    16:53:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:47.499     16:53:10 env -- scripts/common.sh@365 -- # decimal 1
00:03:47.499     16:53:10 env -- scripts/common.sh@353 -- # local d=1
00:03:47.499     16:53:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:47.499     16:53:10 env -- scripts/common.sh@355 -- # echo 1
00:03:47.499    16:53:10 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:47.499     16:53:10 env -- scripts/common.sh@366 -- # decimal 2
00:03:47.499     16:53:10 env -- scripts/common.sh@353 -- # local d=2
00:03:47.499     16:53:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:47.499     16:53:10 env -- scripts/common.sh@355 -- # echo 2
00:03:47.499    16:53:10 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:47.499    16:53:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:47.499    16:53:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:47.499    16:53:10 env -- scripts/common.sh@368 -- # return 0
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:47.499  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.499  		--rc genhtml_branch_coverage=1
00:03:47.499  		--rc genhtml_function_coverage=1
00:03:47.499  		--rc genhtml_legend=1
00:03:47.499  		--rc geninfo_all_blocks=1
00:03:47.499  		--rc geninfo_unexecuted_blocks=1
00:03:47.499  		
00:03:47.499  		'
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:47.499  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.499  		--rc genhtml_branch_coverage=1
00:03:47.499  		--rc genhtml_function_coverage=1
00:03:47.499  		--rc genhtml_legend=1
00:03:47.499  		--rc geninfo_all_blocks=1
00:03:47.499  		--rc geninfo_unexecuted_blocks=1
00:03:47.499  		
00:03:47.499  		'
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:47.499  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.499  		--rc genhtml_branch_coverage=1
00:03:47.499  		--rc genhtml_function_coverage=1
00:03:47.499  		--rc genhtml_legend=1
00:03:47.499  		--rc geninfo_all_blocks=1
00:03:47.499  		--rc geninfo_unexecuted_blocks=1
00:03:47.499  		
00:03:47.499  		'
00:03:47.499    16:53:10 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:47.499  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:47.499  		--rc genhtml_branch_coverage=1
00:03:47.499  		--rc genhtml_function_coverage=1
00:03:47.499  		--rc genhtml_legend=1
00:03:47.499  		--rc geninfo_all_blocks=1
00:03:47.499  		--rc geninfo_unexecuted_blocks=1
00:03:47.499  		
00:03:47.499  		'
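The block above is the lcov version gate: "lt 1.15 2" splits both version strings on ".", "-", and ":", compares the fields numerically one by one, and because 1.15 < 2 the branch/function-coverage options get exported into LCOV_OPTS and LCOV. A compact sketch of that comparison (illustrative name; assumes numeric fields, as cmp_versions does):

    # Return 0 when dotted version $1 is strictly less than $2.
    version_lt() {
        local -a v1 v2; local i a b
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}     # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                            # equal is not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: add coverage rc options"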
00:03:47.499   16:53:10 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:03:47.499   16:53:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:47.499   16:53:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:47.499   16:53:10 env -- common/autotest_common.sh@10 -- # set +x
00:03:47.499  ************************************
00:03:47.499  START TEST env_memory
00:03:47.499  ************************************
00:03:47.499   16:53:10 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:03:47.499  
00:03:47.499  
00:03:47.499       CUnit - A unit testing framework for C - Version 2.1-3
00:03:47.499       http://cunit.sourceforge.net/
00:03:47.499  
00:03:47.499  
00:03:47.499  Suite: memory
00:03:47.499    Test: alloc and free memory map ...[2024-12-09 16:53:10.535159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:47.758  passed
00:03:47.758    Test: mem map translation ...[2024-12-09 16:53:10.611409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:47.758  [2024-12-09 16:53:10.611469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:47.758  [2024-12-09 16:53:10.611533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:47.758  [2024-12-09 16:53:10.611549] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:47.758  passed
00:03:47.758    Test: mem map registration ...[2024-12-09 16:53:10.680286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:47.758  [2024-12-09 16:53:10.680351] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:47.758  passed
00:03:47.758    Test: mem map adjacent registrations ...passed
00:03:47.758  
00:03:47.758  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:47.758                suites      1      1    n/a      0        0
00:03:47.758                 tests      4      4      4      0        0
00:03:47.758               asserts    152    152    152      0      n/a
00:03:47.758  
00:03:47.758  Elapsed time =    0.294 seconds
00:03:47.758  
00:03:47.758  real	0m0.330s
00:03:47.758  user	0m0.303s
00:03:47.758  sys	0m0.019s
00:03:47.758   16:53:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:47.758  ************************************
00:03:47.758  END TEST env_memory
00:03:47.758  ************************************
00:03:47.758   16:53:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:48.016   16:53:10 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:03:48.016   16:53:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:48.016   16:53:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:48.016   16:53:10 env -- common/autotest_common.sh@10 -- # set +x
00:03:48.016  ************************************
00:03:48.016  START TEST env_vtophys
00:03:48.016  ************************************
00:03:48.016   16:53:10 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:03:48.016  EAL: lib.eal log level changed from notice to debug
00:03:48.016  EAL: Detected lcore 0 as core 0 on socket 0
00:03:48.016  EAL: Detected lcore 1 as core 0 on socket 0
00:03:48.016  EAL: Detected lcore 2 as core 0 on socket 0
00:03:48.016  EAL: Detected lcore 3 as core 0 on socket 0
00:03:48.017  EAL: Detected lcore 4 as core 0 on socket 0
00:03:48.017  EAL: Detected lcore 5 as core 0 on socket 0
00:03:48.017  EAL: Detected lcore 6 as core 0 on socket 0
00:03:48.017  EAL: Detected lcore 7 as core 0 on socket 0
00:03:48.017  EAL: Detected lcore 8 as core 0 on socket 0
00:03:48.017  EAL: Detected lcore 9 as core 0 on socket 0
00:03:48.017  EAL: Maximum logical cores by configuration: 128
00:03:48.017  EAL: Detected CPU lcores: 10
00:03:48.017  EAL: Detected NUMA nodes: 1
00:03:48.017  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:48.017  EAL: Detected shared linkage of DPDK
00:03:48.017  EAL: No shared files mode enabled, IPC will be disabled
00:03:48.017  EAL: Selected IOVA mode 'PA'
00:03:48.017  EAL: Probing VFIO support...
00:03:48.017  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:03:48.017  EAL: VFIO modules not loaded, skipping VFIO support...
00:03:48.017  EAL: Ask a virtual area of 0x2e000 bytes
00:03:48.017  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:48.017  EAL: Setting up physically contiguous memory...
00:03:48.017  EAL: Setting maximum number of open files to 524288
00:03:48.017  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:48.017  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:48.017  EAL: Ask a virtual area of 0x61000 bytes
00:03:48.017  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:48.017  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:48.017  EAL: Ask a virtual area of 0x400000000 bytes
00:03:48.017  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:48.017  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:48.017  EAL: Ask a virtual area of 0x61000 bytes
00:03:48.017  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:48.017  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:48.017  EAL: Ask a virtual area of 0x400000000 bytes
00:03:48.017  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:48.017  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:48.017  EAL: Ask a virtual area of 0x61000 bytes
00:03:48.017  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:48.017  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:48.017  EAL: Ask a virtual area of 0x400000000 bytes
00:03:48.017  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:48.017  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:48.017  EAL: Ask a virtual area of 0x61000 bytes
00:03:48.017  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:48.017  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:48.017  EAL: Ask a virtual area of 0x400000000 bytes
00:03:48.017  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:48.017  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:48.017  EAL: Hugepages will be freed exactly as allocated.
00:03:48.017  EAL: No shared files mode enabled, IPC is disabled
00:03:48.017  EAL: No shared files mode enabled, IPC is disabled
00:03:48.017  EAL: TSC frequency is ~2600000 KHz
00:03:48.017  EAL: Main lcore 0 is ready (tid=7fc45323da40;cpuset=[0])
00:03:48.017  EAL: Trying to obtain current memory policy.
00:03:48.017  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.017  EAL: Restoring previous memory policy: 0
00:03:48.017  EAL: request: mp_malloc_sync
00:03:48.017  EAL: No shared files mode enabled, IPC is disabled
00:03:48.017  EAL: Heap on socket 0 was expanded by 2MB
00:03:48.017  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:03:48.017  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:03:48.017  EAL: Mem event callback 'spdk:(nil)' registered
00:03:48.017  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
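At this point EAL has reserved four memseg lists of 0x400000000 bytes (16 GiB) of virtual address space each on the single NUMA node, all backed by 2 MiB hugepages that are faulted in only as the heap grows; the repeated "No shared files mode" lines indicate EAL is running without shared hugepage files, so secondary-process IPC is disabled. The host's actual hugepage inventory can be checked at the usual locations:

    # 2 MiB hugepage availability on the test host.
    grep -i huge /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages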
00:03:48.017  
00:03:48.017  
00:03:48.017       CUnit - A unit testing framework for C - Version 2.1-3
00:03:48.017       http://cunit.sourceforge.net/
00:03:48.017  
00:03:48.017  
00:03:48.017  Suite: components_suite
00:03:48.582    Test: vtophys_malloc_test ...passed
00:03:48.582    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:48.582  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.582  EAL: Restoring previous memory policy: 4
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was expanded by 4MB
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was shrunk by 4MB
00:03:48.582  EAL: Trying to obtain current memory policy.
00:03:48.582  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.582  EAL: Restoring previous memory policy: 4
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was expanded by 6MB
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was shrunk by 6MB
00:03:48.582  EAL: Trying to obtain current memory policy.
00:03:48.582  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.582  EAL: Restoring previous memory policy: 4
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was expanded by 10MB
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was shrunk by 10MB
00:03:48.582  EAL: Trying to obtain current memory policy.
00:03:48.582  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.582  EAL: Restoring previous memory policy: 4
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was expanded by 18MB
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was shrunk by 18MB
00:03:48.582  EAL: Trying to obtain current memory policy.
00:03:48.582  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.582  EAL: Restoring previous memory policy: 4
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was expanded by 34MB
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was shrunk by 34MB
00:03:48.582  EAL: Trying to obtain current memory policy.
00:03:48.582  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.582  EAL: Restoring previous memory policy: 4
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was expanded by 66MB
00:03:48.582  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.582  EAL: request: mp_malloc_sync
00:03:48.582  EAL: No shared files mode enabled, IPC is disabled
00:03:48.582  EAL: Heap on socket 0 was shrunk by 66MB
00:03:48.840  EAL: Trying to obtain current memory policy.
00:03:48.840  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.840  EAL: Restoring previous memory policy: 4
00:03:48.840  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.840  EAL: request: mp_malloc_sync
00:03:48.840  EAL: No shared files mode enabled, IPC is disabled
00:03:48.840  EAL: Heap on socket 0 was expanded by 130MB
00:03:48.840  EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.840  EAL: request: mp_malloc_sync
00:03:48.840  EAL: No shared files mode enabled, IPC is disabled
00:03:48.840  EAL: Heap on socket 0 was shrunk by 130MB
00:03:49.099  EAL: Trying to obtain current memory policy.
00:03:49.099  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:49.099  EAL: Restoring previous memory policy: 4
00:03:49.099  EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.099  EAL: request: mp_malloc_sync
00:03:49.099  EAL: No shared files mode enabled, IPC is disabled
00:03:49.099  EAL: Heap on socket 0 was expanded by 258MB
00:03:49.357  EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.357  EAL: request: mp_malloc_sync
00:03:49.357  EAL: No shared files mode enabled, IPC is disabled
00:03:49.357  EAL: Heap on socket 0 was shrunk by 258MB
00:03:49.614  EAL: Trying to obtain current memory policy.
00:03:49.614  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:49.614  EAL: Restoring previous memory policy: 4
00:03:49.614  EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.614  EAL: request: mp_malloc_sync
00:03:49.614  EAL: No shared files mode enabled, IPC is disabled
00:03:49.614  EAL: Heap on socket 0 was expanded by 514MB
00:03:50.180  EAL: Calling mem event callback 'spdk:(nil)'
00:03:50.180  EAL: request: mp_malloc_sync
00:03:50.180  EAL: No shared files mode enabled, IPC is disabled
00:03:50.180  EAL: Heap on socket 0 was shrunk by 514MB
00:03:50.744  EAL: Trying to obtain current memory policy.
00:03:50.744  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:51.002  EAL: Restoring previous memory policy: 4
00:03:51.002  EAL: Calling mem event callback 'spdk:(nil)'
00:03:51.002  EAL: request: mp_malloc_sync
00:03:51.002  EAL: No shared files mode enabled, IPC is disabled
00:03:51.002  EAL: Heap on socket 0 was expanded by 1026MB
00:03:51.937  EAL: Calling mem event callback 'spdk:(nil)'
00:03:51.937  EAL: request: mp_malloc_sync
00:03:51.937  EAL: No shared files mode enabled, IPC is disabled
00:03:51.937  EAL: Heap on socket 0 was shrunk by 1026MB
00:03:52.874  passed
00:03:52.874  
00:03:52.874  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.874                suites      1      1    n/a      0        0
00:03:52.874                 tests      2      2      2      0        0
00:03:52.874               asserts   5873   5873   5873      0      n/a
00:03:52.874  
00:03:52.874  Elapsed time =    4.586 seconds
00:03:52.874  EAL: Calling mem event callback 'spdk:(nil)'
00:03:52.874  EAL: request: mp_malloc_sync
00:03:52.874  EAL: No shared files mode enabled, IPC is disabled
00:03:52.874  EAL: Heap on socket 0 was shrunk by 2MB
00:03:52.874  EAL: No shared files mode enabled, IPC is disabled
00:03:52.874  EAL: No shared files mode enabled, IPC is disabled
00:03:52.874  EAL: No shared files mode enabled, IPC is disabled
00:03:52.874  
00:03:52.874  real	0m4.854s
00:03:52.874  user	0m4.060s
00:03:52.874  sys	0m0.645s
00:03:52.874   16:53:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.874   16:53:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:52.874  ************************************
00:03:52.874  END TEST env_vtophys
00:03:52.874  ************************************
00:03:52.874   16:53:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:03:52.874   16:53:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.874   16:53:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.874   16:53:15 env -- common/autotest_common.sh@10 -- # set +x
00:03:52.874  ************************************
00:03:52.874  START TEST env_pci
00:03:52.874  ************************************
00:03:52.874   16:53:15 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:03:52.874  
00:03:52.874  
00:03:52.874       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.874       http://cunit.sourceforge.net/
00:03:52.874  
00:03:52.874  
00:03:52.874  Suite: pci
00:03:52.874    Test: pci_hook ...[2024-12-09 16:53:15.752889] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58256 has claimed it
00:03:52.874  passed
00:03:52.874  
00:03:52.874  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.874                suites      1      1    n/a      0        0
00:03:52.874                 tests      1      1      1      0        0
00:03:52.874               asserts     25     25     25      0      n/a
00:03:52.874  
00:03:52.874  Elapsed time =    0.008 seconds
00:03:52.874  EAL: Cannot find device (10000:00:01.0)
00:03:52.874  EAL: Failed to attach device on primary process
00:03:52.874  
00:03:52.874  
00:03:52.874  real	0m0.069s
00:03:52.874  user	0m0.031s
00:03:52.874  sys	0m0.038s
00:03:52.874   16:53:15 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.874   16:53:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:52.874  ************************************
00:03:52.874  END TEST env_pci
00:03:52.874  ************************************
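env_pci exercises spdk_pci_device_claim's locking: the test claims a deliberately fake BDF (10000:00:01.0), a second claim attempt fails with the "Cannot create lock" error shown, and EAL separately reports that no such device exists to attach. The error message reveals the convention of one lock file per BDF under /var/tmp; leftovers from crashed runs can be spotted the same way:

    # SPDK per-device claim locks are keyed by PCI BDF.
    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null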
00:03:52.874   16:53:15 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:52.874    16:53:15 env -- env/env.sh@15 -- # uname
00:03:52.874   16:53:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:52.874   16:53:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:52.874   16:53:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:52.874   16:53:15 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:52.874   16:53:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.874   16:53:15 env -- common/autotest_common.sh@10 -- # set +x
00:03:52.874  ************************************
00:03:52.874  START TEST env_dpdk_post_init
00:03:52.874  ************************************
00:03:52.874   16:53:15 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:52.874  EAL: Detected CPU lcores: 10
00:03:52.874  EAL: Detected NUMA nodes: 1
00:03:52.874  EAL: Detected shared linkage of DPDK
00:03:52.874  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:52.874  EAL: Selected IOVA mode 'PA'
00:03:53.132  TELEMETRY: No legacy callbacks, legacy socket not created
00:03:53.132  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:03:53.132  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:03:53.132  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1)
00:03:53.132  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1)
00:03:53.132  Starting DPDK initialization...
00:03:53.132  Starting SPDK post initialization...
00:03:53.132  SPDK NVMe probe
00:03:53.132  Attaching to 0000:00:10.0
00:03:53.132  Attaching to 0000:00:11.0
00:03:53.132  Attaching to 0000:00:12.0
00:03:53.132  Attaching to 0000:00:13.0
00:03:53.132  Attached to 0000:00:10.0
00:03:53.132  Attached to 0000:00:11.0
00:03:53.132  Attached to 0000:00:13.0
00:03:53.132  Attached to 0000:00:12.0
00:03:53.132  Cleaning up...
00:03:53.132  
00:03:53.132  real	0m0.242s
00:03:53.132  user	0m0.078s
00:03:53.132  sys	0m0.066s
00:03:53.132   16:53:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:53.132   16:53:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:53.132  ************************************
00:03:53.132  END TEST env_dpdk_post_init
00:03:53.132  ************************************
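env_dpdk_post_init drives a full DPDK plus SPDK bring-up against the four uio-bound controllers; the argv it received was assembled by env.sh just above: -c 0x1 selects core 0 only, and on Linux --base-virtaddr pins the reservation base so multi-process setups can map at matching addresses. Reconstructed from the trace:

    # How env.sh builds the arguments used for this test.
    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000
    fi
    test/env/env_dpdk_post_init/env_dpdk_post_init $argv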
00:03:53.132    16:53:16 env -- env/env.sh@26 -- # uname
00:03:53.132   16:53:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:53.132   16:53:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:03:53.132   16:53:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:53.132   16:53:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:53.132   16:53:16 env -- common/autotest_common.sh@10 -- # set +x
00:03:53.132  ************************************
00:03:53.132  START TEST env_mem_callbacks
00:03:53.132  ************************************
00:03:53.132   16:53:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:03:53.132  EAL: Detected CPU lcores: 10
00:03:53.132  EAL: Detected NUMA nodes: 1
00:03:53.132  EAL: Detected shared linkage of DPDK
00:03:53.390  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:53.390  EAL: Selected IOVA mode 'PA'
00:03:53.390  TELEMETRY: No legacy callbacks, legacy socket not created
00:03:53.390  
00:03:53.390  
00:03:53.390       CUnit - A unit testing framework for C - Version 2.1-3
00:03:53.390       http://cunit.sourceforge.net/
00:03:53.390  
00:03:53.390  
00:03:53.390  Suite: memory
00:03:53.390    Test: test ...
00:03:53.390  register 0x200000200000 2097152
00:03:53.390  malloc 3145728
00:03:53.390  register 0x200000400000 4194304
00:03:53.390  buf 0x2000004fffc0 len 3145728 PASSED
00:03:53.390  malloc 64
00:03:53.390  buf 0x2000004ffec0 len 64 PASSED
00:03:53.390  malloc 4194304
00:03:53.390  register 0x200000800000 6291456
00:03:53.390  buf 0x2000009fffc0 len 4194304 PASSED
00:03:53.390  free 0x2000004fffc0 3145728
00:03:53.390  free 0x2000004ffec0 64
00:03:53.390  unregister 0x200000400000 4194304 PASSED
00:03:53.390  free 0x2000009fffc0 4194304
00:03:53.390  unregister 0x200000800000 6291456 PASSED
00:03:53.390  malloc 8388608
00:03:53.390  register 0x200000400000 10485760
00:03:53.390  buf 0x2000005fffc0 len 8388608 PASSED
00:03:53.390  free 0x2000005fffc0 8388608
00:03:53.390  unregister 0x200000400000 10485760 PASSED
00:03:53.390  passed
00:03:53.390  
00:03:53.390  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:53.390                suites      1      1    n/a      0        0
00:03:53.390                 tests      1      1      1      0        0
00:03:53.390               asserts     15     15     15      0      n/a
00:03:53.390  
00:03:53.390  Elapsed time =    0.039 seconds
00:03:53.390  
00:03:53.390  real	0m0.210s
00:03:53.390  user	0m0.056s
00:03:53.390  sys	0m0.053s
00:03:53.390   16:53:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:53.390  ************************************
00:03:53.390  END TEST env_mem_callbacks
00:03:53.390  ************************************
00:03:53.390   16:53:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:53.390  
00:03:53.390  real	0m6.051s
00:03:53.390  user	0m4.682s
00:03:53.390  sys	0m1.012s
00:03:53.390   16:53:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:53.390   16:53:16 env -- common/autotest_common.sh@10 -- # set +x
00:03:53.390  ************************************
00:03:53.390  END TEST env
00:03:53.390  ************************************
00:03:53.390   16:53:16  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:03:53.390   16:53:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:53.390   16:53:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:53.390   16:53:16  -- common/autotest_common.sh@10 -- # set +x
00:03:53.390  ************************************
00:03:53.390  START TEST rpc
00:03:53.390  ************************************
00:03:53.390   16:53:16 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:03:53.648  * Looking for test storage...
00:03:53.648  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:53.648     16:53:16 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:53.648     16:53:16 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:53.648    16:53:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:53.648    16:53:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:53.648    16:53:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:53.648    16:53:16 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:53.648    16:53:16 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:53.648    16:53:16 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:53.648    16:53:16 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:53.648    16:53:16 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:53.648    16:53:16 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:53.648    16:53:16 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:53.648    16:53:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:53.648    16:53:16 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:53.648    16:53:16 rpc -- scripts/common.sh@345 -- # : 1
00:03:53.648    16:53:16 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:53.648    16:53:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:53.648     16:53:16 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:53.648     16:53:16 rpc -- scripts/common.sh@353 -- # local d=1
00:03:53.648     16:53:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:53.648     16:53:16 rpc -- scripts/common.sh@355 -- # echo 1
00:03:53.648    16:53:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:53.648     16:53:16 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:53.648     16:53:16 rpc -- scripts/common.sh@353 -- # local d=2
00:03:53.648     16:53:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:53.648     16:53:16 rpc -- scripts/common.sh@355 -- # echo 2
00:03:53.648    16:53:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:53.648    16:53:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:53.648    16:53:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:53.648    16:53:16 rpc -- scripts/common.sh@368 -- # return 0
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:53.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.648  		--rc genhtml_branch_coverage=1
00:03:53.648  		--rc genhtml_function_coverage=1
00:03:53.648  		--rc genhtml_legend=1
00:03:53.648  		--rc geninfo_all_blocks=1
00:03:53.648  		--rc geninfo_unexecuted_blocks=1
00:03:53.648  		
00:03:53.648  		'
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:53.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.648  		--rc genhtml_branch_coverage=1
00:03:53.648  		--rc genhtml_function_coverage=1
00:03:53.648  		--rc genhtml_legend=1
00:03:53.648  		--rc geninfo_all_blocks=1
00:03:53.648  		--rc geninfo_unexecuted_blocks=1
00:03:53.648  		
00:03:53.648  		'
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:53.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.648  		--rc genhtml_branch_coverage=1
00:03:53.648  		--rc genhtml_function_coverage=1
00:03:53.648  		--rc genhtml_legend=1
00:03:53.648  		--rc geninfo_all_blocks=1
00:03:53.648  		--rc geninfo_unexecuted_blocks=1
00:03:53.648  		
00:03:53.648  		'
00:03:53.648    16:53:16 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:53.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:53.648  		--rc genhtml_branch_coverage=1
00:03:53.648  		--rc genhtml_function_coverage=1
00:03:53.648  		--rc genhtml_legend=1
00:03:53.648  		--rc geninfo_all_blocks=1
00:03:53.648  		--rc geninfo_unexecuted_blocks=1
00:03:53.648  		
00:03:53.648  		'
00:03:53.648   16:53:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58383
00:03:53.648   16:53:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:53.648   16:53:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58383
00:03:53.648   16:53:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:03:53.648   16:53:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 58383 ']'
00:03:53.648   16:53:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:53.648   16:53:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:53.649  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:53.649   16:53:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:53.649   16:53:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:53.649   16:53:16 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:53.649  [2024-12-09 16:53:16.620992] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:03:53.649  [2024-12-09 16:53:16.621126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58383 ]
00:03:53.906  [2024-12-09 16:53:16.780060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:53.906  [2024-12-09 16:53:16.877957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:53.906  [2024-12-09 16:53:16.878005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58383' to capture a snapshot of events at runtime.
00:03:53.906  [2024-12-09 16:53:16.878015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:53.906  [2024-12-09 16:53:16.878025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:53.906  [2024-12-09 16:53:16.878032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58383 for offline analysis/debug.
00:03:53.906  [2024-12-09 16:53:16.878880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:54.476   16:53:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:54.476   16:53:17 rpc -- common/autotest_common.sh@868 -- # return 0
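rpc.sh launched spdk_tgt with the bdev tracepoint group enabled, and waitforlisten (pid 58383) blocked until the target answered on /var/tmp/spdk.sock, which is why the (( i == 0 )) / return 0 pair above succeeds on the first check. A stripped-down equivalent of that wait (the real helper also bounds the loop with max_retries):

    # Start a target and wait until its RPC socket responds.
    build/bin/spdk_tgt -e bdev & spdk_pid=$!
    until [ -S /var/tmp/spdk.sock ] &&
          scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    echo "spdk_tgt $spdk_pid is listening"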
00:03:54.476   16:53:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:03:54.476   16:53:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:03:54.476   16:53:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:54.476   16:53:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:54.476   16:53:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:54.476   16:53:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:54.476   16:53:17 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.476  ************************************
00:03:54.476  START TEST rpc_integrity
00:03:54.476  ************************************
00:03:54.476   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:54.476    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:54.476    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.476    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.476    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.476   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:54.476    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:54.737   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:54.737    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:54.737    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.737    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.737    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.737   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:54.737    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:54.737    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.737    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.737    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.737   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:54.737  {
00:03:54.737  "name": "Malloc0",
00:03:54.737  "aliases": [
00:03:54.737  "db1fabd3-c45c-4492-9110-54ddb4c0ce2f"
00:03:54.737  ],
00:03:54.737  "product_name": "Malloc disk",
00:03:54.737  "block_size": 512,
00:03:54.737  "num_blocks": 16384,
00:03:54.737  "uuid": "db1fabd3-c45c-4492-9110-54ddb4c0ce2f",
00:03:54.737  "assigned_rate_limits": {
00:03:54.737  "rw_ios_per_sec": 0,
00:03:54.737  "rw_mbytes_per_sec": 0,
00:03:54.737  "r_mbytes_per_sec": 0,
00:03:54.737  "w_mbytes_per_sec": 0
00:03:54.737  },
00:03:54.737  "claimed": false,
00:03:54.737  "zoned": false,
00:03:54.737  "supported_io_types": {
00:03:54.737  "read": true,
00:03:54.737  "write": true,
00:03:54.737  "unmap": true,
00:03:54.737  "flush": true,
00:03:54.737  "reset": true,
00:03:54.737  "nvme_admin": false,
00:03:54.737  "nvme_io": false,
00:03:54.737  "nvme_io_md": false,
00:03:54.737  "write_zeroes": true,
00:03:54.737  "zcopy": true,
00:03:54.737  "get_zone_info": false,
00:03:54.737  "zone_management": false,
00:03:54.737  "zone_append": false,
00:03:54.737  "compare": false,
00:03:54.737  "compare_and_write": false,
00:03:54.737  "abort": true,
00:03:54.737  "seek_hole": false,
00:03:54.737  "seek_data": false,
00:03:54.737  "copy": true,
00:03:54.737  "nvme_iov_md": false
00:03:54.737  },
00:03:54.737  "memory_domains": [
00:03:54.737  {
00:03:54.737  "dma_device_id": "system",
00:03:54.737  "dma_device_type": 1
00:03:54.737  },
00:03:54.737  {
00:03:54.737  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:54.737  "dma_device_type": 2
00:03:54.737  }
00:03:54.737  ],
00:03:54.737  "driver_specific": {}
00:03:54.738  }
00:03:54.738  ]'
00:03:54.738    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
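rpc_integrity starts from an empty bdev list, creates an 8 MiB malloc bdev with 512-byte blocks (hence num_blocks 16384 in the JSON above), and re-reads the list to confirm exactly one entry. The same steps as direct rpc.py calls against the running target:

    # Create and inspect the malloc bdev used by rpc_integrity.
    scripts/rpc.py bdev_get_bdevs | jq length     # -> 0
    scripts/rpc.py bdev_malloc_create 8 512       # prints the name, e.g. Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length     # -> 1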
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.738  [2024-12-09 16:53:17.587002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:54.738  [2024-12-09 16:53:17.587056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:54.738  [2024-12-09 16:53:17.587081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:03:54.738  [2024-12-09 16:53:17.587093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:54.738  [2024-12-09 16:53:17.589269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:54.738  [2024-12-09 16:53:17.589309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:54.738  Passthru0
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.738    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:54.738    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.738    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.738    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:54.738  {
00:03:54.738  "name": "Malloc0",
00:03:54.738  "aliases": [
00:03:54.738  "db1fabd3-c45c-4492-9110-54ddb4c0ce2f"
00:03:54.738  ],
00:03:54.738  "product_name": "Malloc disk",
00:03:54.738  "block_size": 512,
00:03:54.738  "num_blocks": 16384,
00:03:54.738  "uuid": "db1fabd3-c45c-4492-9110-54ddb4c0ce2f",
00:03:54.738  "assigned_rate_limits": {
00:03:54.738  "rw_ios_per_sec": 0,
00:03:54.738  "rw_mbytes_per_sec": 0,
00:03:54.738  "r_mbytes_per_sec": 0,
00:03:54.738  "w_mbytes_per_sec": 0
00:03:54.738  },
00:03:54.738  "claimed": true,
00:03:54.738  "claim_type": "exclusive_write",
00:03:54.738  "zoned": false,
00:03:54.738  "supported_io_types": {
00:03:54.738  "read": true,
00:03:54.738  "write": true,
00:03:54.738  "unmap": true,
00:03:54.738  "flush": true,
00:03:54.738  "reset": true,
00:03:54.738  "nvme_admin": false,
00:03:54.738  "nvme_io": false,
00:03:54.738  "nvme_io_md": false,
00:03:54.738  "write_zeroes": true,
00:03:54.738  "zcopy": true,
00:03:54.738  "get_zone_info": false,
00:03:54.738  "zone_management": false,
00:03:54.738  "zone_append": false,
00:03:54.738  "compare": false,
00:03:54.738  "compare_and_write": false,
00:03:54.738  "abort": true,
00:03:54.738  "seek_hole": false,
00:03:54.738  "seek_data": false,
00:03:54.738  "copy": true,
00:03:54.738  "nvme_iov_md": false
00:03:54.738  },
00:03:54.738  "memory_domains": [
00:03:54.738  {
00:03:54.738  "dma_device_id": "system",
00:03:54.738  "dma_device_type": 1
00:03:54.738  },
00:03:54.738  {
00:03:54.738  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:54.738  "dma_device_type": 2
00:03:54.738  }
00:03:54.738  ],
00:03:54.738  "driver_specific": {}
00:03:54.738  },
00:03:54.738  {
00:03:54.738  "name": "Passthru0",
00:03:54.738  "aliases": [
00:03:54.738  "2e4a6c43-71b0-5378-9dcd-aec6c17db893"
00:03:54.738  ],
00:03:54.738  "product_name": "passthru",
00:03:54.738  "block_size": 512,
00:03:54.738  "num_blocks": 16384,
00:03:54.738  "uuid": "2e4a6c43-71b0-5378-9dcd-aec6c17db893",
00:03:54.738  "assigned_rate_limits": {
00:03:54.738  "rw_ios_per_sec": 0,
00:03:54.738  "rw_mbytes_per_sec": 0,
00:03:54.738  "r_mbytes_per_sec": 0,
00:03:54.738  "w_mbytes_per_sec": 0
00:03:54.738  },
00:03:54.738  "claimed": false,
00:03:54.738  "zoned": false,
00:03:54.738  "supported_io_types": {
00:03:54.738  "read": true,
00:03:54.738  "write": true,
00:03:54.738  "unmap": true,
00:03:54.738  "flush": true,
00:03:54.738  "reset": true,
00:03:54.738  "nvme_admin": false,
00:03:54.738  "nvme_io": false,
00:03:54.738  "nvme_io_md": false,
00:03:54.738  "write_zeroes": true,
00:03:54.738  "zcopy": true,
00:03:54.738  "get_zone_info": false,
00:03:54.738  "zone_management": false,
00:03:54.738  "zone_append": false,
00:03:54.738  "compare": false,
00:03:54.738  "compare_and_write": false,
00:03:54.738  "abort": true,
00:03:54.738  "seek_hole": false,
00:03:54.738  "seek_data": false,
00:03:54.738  "copy": true,
00:03:54.738  "nvme_iov_md": false
00:03:54.738  },
00:03:54.738  "memory_domains": [
00:03:54.738  {
00:03:54.738  "dma_device_id": "system",
00:03:54.738  "dma_device_type": 1
00:03:54.738  },
00:03:54.738  {
00:03:54.738  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:54.738  "dma_device_type": 2
00:03:54.738  }
00:03:54.738  ],
00:03:54.738  "driver_specific": {
00:03:54.738  "passthru": {
00:03:54.738  "name": "Passthru0",
00:03:54.738  "base_bdev_name": "Malloc0"
00:03:54.738  }
00:03:54.738  }
00:03:54.738  }
00:03:54.738  ]'
00:03:54.738    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.738    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:54.738    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.738    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.738    16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:54.738    16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:54.738   16:53:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:54.738  
00:03:54.738  real	0m0.237s
00:03:54.738  user	0m0.120s
00:03:54.738  sys	0m0.031s
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:54.738   16:53:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:54.738  ************************************
00:03:54.738  END TEST rpc_integrity
00:03:54.738  ************************************
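The second half of rpc_integrity layered a passthru bdev on Malloc0 (the JSON shows Malloc0 claimed with exclusive_write and two bdevs total), then tore both down and verified the list is empty again. As standalone RPC calls:

    # Layer, verify, and tear down the passthru/malloc pair.
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length     # -> 2
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length     # -> 0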
00:03:54.738   16:53:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:03:54.738   16:53:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:54.738   16:53:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:54.738   16:53:17 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.738  ************************************
00:03:54.738  START TEST rpc_plugins
00:03:54.738  ************************************
00:03:54.738   16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:03:54.738    16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:03:54.738    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.738    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.999   16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:03:54.999    16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.999   16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:03:54.999  {
00:03:54.999  "name": "Malloc1",
00:03:54.999  "aliases": [
00:03:54.999  "b7c2c9b1-b321-4438-8547-7e7f3eaab59a"
00:03:54.999  ],
00:03:54.999  "product_name": "Malloc disk",
00:03:54.999  "block_size": 4096,
00:03:54.999  "num_blocks": 256,
00:03:54.999  "uuid": "b7c2c9b1-b321-4438-8547-7e7f3eaab59a",
00:03:54.999  "assigned_rate_limits": {
00:03:54.999  "rw_ios_per_sec": 0,
00:03:54.999  "rw_mbytes_per_sec": 0,
00:03:54.999  "r_mbytes_per_sec": 0,
00:03:54.999  "w_mbytes_per_sec": 0
00:03:54.999  },
00:03:54.999  "claimed": false,
00:03:54.999  "zoned": false,
00:03:54.999  "supported_io_types": {
00:03:54.999  "read": true,
00:03:54.999  "write": true,
00:03:54.999  "unmap": true,
00:03:54.999  "flush": true,
00:03:54.999  "reset": true,
00:03:54.999  "nvme_admin": false,
00:03:54.999  "nvme_io": false,
00:03:54.999  "nvme_io_md": false,
00:03:54.999  "write_zeroes": true,
00:03:54.999  "zcopy": true,
00:03:54.999  "get_zone_info": false,
00:03:54.999  "zone_management": false,
00:03:54.999  "zone_append": false,
00:03:54.999  "compare": false,
00:03:54.999  "compare_and_write": false,
00:03:54.999  "abort": true,
00:03:54.999  "seek_hole": false,
00:03:54.999  "seek_data": false,
00:03:54.999  "copy": true,
00:03:54.999  "nvme_iov_md": false
00:03:54.999  },
00:03:54.999  "memory_domains": [
00:03:54.999  {
00:03:54.999  "dma_device_id": "system",
00:03:54.999  "dma_device_type": 1
00:03:54.999  },
00:03:54.999  {
00:03:54.999  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:54.999  "dma_device_type": 2
00:03:54.999  }
00:03:54.999  ],
00:03:54.999  "driver_specific": {}
00:03:54.999  }
00:03:54.999  ]'
00:03:54.999    16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:03:54.999   16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:03:54.999   16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:03:54.999   16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.999   16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:54.999   16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.999    16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:54.999    16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:54.999   16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:03:54.999    16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:03:54.999   16:53:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:03:54.999  
00:03:54.999  real	0m0.116s
00:03:55.000  user	0m0.069s
00:03:55.000  sys	0m0.012s
00:03:55.000   16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:55.000   16:53:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:55.000  ************************************
00:03:55.000  END TEST rpc_plugins
00:03:55.000  ************************************
00:03:55.000   16:53:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:03:55.000   16:53:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:55.000   16:53:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:55.000   16:53:17 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:55.000  ************************************
00:03:55.000  START TEST rpc_trace_cmd_test
00:03:55.000  ************************************
00:03:55.000   16:53:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:03:55.000   16:53:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:03:55.000    16:53:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:03:55.000    16:53:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.000    16:53:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:03:55.000    16:53:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.000   16:53:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:03:55.000  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58383",
00:03:55.000  "tpoint_group_mask": "0x8",
00:03:55.000  "iscsi_conn": {
00:03:55.000  "mask": "0x2",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "scsi": {
00:03:55.000  "mask": "0x4",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "bdev": {
00:03:55.000  "mask": "0x8",
00:03:55.000  "tpoint_mask": "0xffffffffffffffff"
00:03:55.000  },
00:03:55.000  "nvmf_rdma": {
00:03:55.000  "mask": "0x10",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "nvmf_tcp": {
00:03:55.000  "mask": "0x20",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "ftl": {
00:03:55.000  "mask": "0x40",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "blobfs": {
00:03:55.000  "mask": "0x80",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "dsa": {
00:03:55.000  "mask": "0x200",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "thread": {
00:03:55.000  "mask": "0x400",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "nvme_pcie": {
00:03:55.000  "mask": "0x800",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "iaa": {
00:03:55.000  "mask": "0x1000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "nvme_tcp": {
00:03:55.000  "mask": "0x2000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "bdev_nvme": {
00:03:55.000  "mask": "0x4000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "sock": {
00:03:55.000  "mask": "0x8000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "blob": {
00:03:55.000  "mask": "0x10000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "bdev_raid": {
00:03:55.000  "mask": "0x20000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  },
00:03:55.000  "scheduler": {
00:03:55.000  "mask": "0x40000",
00:03:55.000  "tpoint_mask": "0x0"
00:03:55.000  }
00:03:55.000  }'
00:03:55.000    16:53:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:03:55.000   16:53:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:03:55.000    16:53:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:03:55.000   16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:03:55.000    16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:03:55.000   16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:03:55.000    16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:03:55.257   16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:03:55.257    16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:03:55.257   16:53:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:03:55.257  
00:03:55.257  real	0m0.182s
00:03:55.257  user	0m0.145s
00:03:55.257  sys	0m0.027s
00:03:55.257   16:53:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:55.257   16:53:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:03:55.257  ************************************
00:03:55.257  END TEST rpc_trace_cmd_test
00:03:55.257  ************************************
00:03:55.257   16:53:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:03:55.257   16:53:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:03:55.257   16:53:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:03:55.257   16:53:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:55.257   16:53:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:55.257   16:53:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:55.257  ************************************
00:03:55.257  START TEST rpc_daemon_integrity
00:03:55.257  ************************************
00:03:55.257   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.257   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:55.257   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.257   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.257    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.257   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:55.257  {
00:03:55.257  "name": "Malloc2",
00:03:55.257  "aliases": [
00:03:55.257  "eda12c2e-ca3d-486d-bd35-8de30840c877"
00:03:55.257  ],
00:03:55.257  "product_name": "Malloc disk",
00:03:55.257  "block_size": 512,
00:03:55.257  "num_blocks": 16384,
00:03:55.257  "uuid": "eda12c2e-ca3d-486d-bd35-8de30840c877",
00:03:55.257  "assigned_rate_limits": {
00:03:55.257  "rw_ios_per_sec": 0,
00:03:55.257  "rw_mbytes_per_sec": 0,
00:03:55.257  "r_mbytes_per_sec": 0,
00:03:55.257  "w_mbytes_per_sec": 0
00:03:55.257  },
00:03:55.257  "claimed": false,
00:03:55.257  "zoned": false,
00:03:55.257  "supported_io_types": {
00:03:55.257  "read": true,
00:03:55.257  "write": true,
00:03:55.257  "unmap": true,
00:03:55.257  "flush": true,
00:03:55.257  "reset": true,
00:03:55.257  "nvme_admin": false,
00:03:55.257  "nvme_io": false,
00:03:55.257  "nvme_io_md": false,
00:03:55.257  "write_zeroes": true,
00:03:55.258  "zcopy": true,
00:03:55.258  "get_zone_info": false,
00:03:55.258  "zone_management": false,
00:03:55.258  "zone_append": false,
00:03:55.258  "compare": false,
00:03:55.258  "compare_and_write": false,
00:03:55.258  "abort": true,
00:03:55.258  "seek_hole": false,
00:03:55.258  "seek_data": false,
00:03:55.258  "copy": true,
00:03:55.258  "nvme_iov_md": false
00:03:55.258  },
00:03:55.258  "memory_domains": [
00:03:55.258  {
00:03:55.258  "dma_device_id": "system",
00:03:55.258  "dma_device_type": 1
00:03:55.258  },
00:03:55.258  {
00:03:55.258  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.258  "dma_device_type": 2
00:03:55.258  }
00:03:55.258  ],
00:03:55.258  "driver_specific": {}
00:03:55.258  }
00:03:55.258  ]'
00:03:55.258    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:55.258   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:55.258   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:03:55.258   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.258   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.258  [2024-12-09 16:53:18.269829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:03:55.258  [2024-12-09 16:53:18.269900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:55.258  [2024-12-09 16:53:18.269920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:03:55.258  [2024-12-09 16:53:18.269931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:55.258  [2024-12-09 16:53:18.272091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:55.258  [2024-12-09 16:53:18.272129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:55.258  Passthru0
00:03:55.258   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.258    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:55.258    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.258    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.258    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.258   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:55.258  {
00:03:55.258  "name": "Malloc2",
00:03:55.258  "aliases": [
00:03:55.258  "eda12c2e-ca3d-486d-bd35-8de30840c877"
00:03:55.258  ],
00:03:55.258  "product_name": "Malloc disk",
00:03:55.258  "block_size": 512,
00:03:55.258  "num_blocks": 16384,
00:03:55.258  "uuid": "eda12c2e-ca3d-486d-bd35-8de30840c877",
00:03:55.258  "assigned_rate_limits": {
00:03:55.258  "rw_ios_per_sec": 0,
00:03:55.258  "rw_mbytes_per_sec": 0,
00:03:55.258  "r_mbytes_per_sec": 0,
00:03:55.258  "w_mbytes_per_sec": 0
00:03:55.258  },
00:03:55.258  "claimed": true,
00:03:55.258  "claim_type": "exclusive_write",
00:03:55.258  "zoned": false,
00:03:55.258  "supported_io_types": {
00:03:55.258  "read": true,
00:03:55.258  "write": true,
00:03:55.258  "unmap": true,
00:03:55.258  "flush": true,
00:03:55.258  "reset": true,
00:03:55.258  "nvme_admin": false,
00:03:55.258  "nvme_io": false,
00:03:55.258  "nvme_io_md": false,
00:03:55.258  "write_zeroes": true,
00:03:55.258  "zcopy": true,
00:03:55.258  "get_zone_info": false,
00:03:55.258  "zone_management": false,
00:03:55.258  "zone_append": false,
00:03:55.258  "compare": false,
00:03:55.258  "compare_and_write": false,
00:03:55.258  "abort": true,
00:03:55.258  "seek_hole": false,
00:03:55.258  "seek_data": false,
00:03:55.258  "copy": true,
00:03:55.258  "nvme_iov_md": false
00:03:55.258  },
00:03:55.258  "memory_domains": [
00:03:55.258  {
00:03:55.258  "dma_device_id": "system",
00:03:55.258  "dma_device_type": 1
00:03:55.258  },
00:03:55.258  {
00:03:55.258  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.258  "dma_device_type": 2
00:03:55.258  }
00:03:55.258  ],
00:03:55.258  "driver_specific": {}
00:03:55.258  },
00:03:55.258  {
00:03:55.258  "name": "Passthru0",
00:03:55.258  "aliases": [
00:03:55.258  "b2186a60-95fd-5623-acb7-9b07bce42e2e"
00:03:55.258  ],
00:03:55.258  "product_name": "passthru",
00:03:55.258  "block_size": 512,
00:03:55.258  "num_blocks": 16384,
00:03:55.258  "uuid": "b2186a60-95fd-5623-acb7-9b07bce42e2e",
00:03:55.258  "assigned_rate_limits": {
00:03:55.258  "rw_ios_per_sec": 0,
00:03:55.258  "rw_mbytes_per_sec": 0,
00:03:55.258  "r_mbytes_per_sec": 0,
00:03:55.258  "w_mbytes_per_sec": 0
00:03:55.258  },
00:03:55.258  "claimed": false,
00:03:55.258  "zoned": false,
00:03:55.258  "supported_io_types": {
00:03:55.258  "read": true,
00:03:55.258  "write": true,
00:03:55.258  "unmap": true,
00:03:55.258  "flush": true,
00:03:55.258  "reset": true,
00:03:55.258  "nvme_admin": false,
00:03:55.258  "nvme_io": false,
00:03:55.258  "nvme_io_md": false,
00:03:55.258  "write_zeroes": true,
00:03:55.258  "zcopy": true,
00:03:55.258  "get_zone_info": false,
00:03:55.258  "zone_management": false,
00:03:55.258  "zone_append": false,
00:03:55.258  "compare": false,
00:03:55.258  "compare_and_write": false,
00:03:55.258  "abort": true,
00:03:55.258  "seek_hole": false,
00:03:55.258  "seek_data": false,
00:03:55.258  "copy": true,
00:03:55.258  "nvme_iov_md": false
00:03:55.258  },
00:03:55.258  "memory_domains": [
00:03:55.258  {
00:03:55.258  "dma_device_id": "system",
00:03:55.258  "dma_device_type": 1
00:03:55.258  },
00:03:55.258  {
00:03:55.258  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:55.258  "dma_device_type": 2
00:03:55.258  }
00:03:55.258  ],
00:03:55.258  "driver_specific": {
00:03:55.258  "passthru": {
00:03:55.258  "name": "Passthru0",
00:03:55.258  "base_bdev_name": "Malloc2"
00:03:55.258  }
00:03:55.258  }
00:03:55.258  }
00:03:55.258  ]'
00:03:55.258    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.535    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:55.535    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:55.535    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.535    16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:55.535    16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:55.535  
00:03:55.535  real	0m0.227s
00:03:55.535  user	0m0.122s
00:03:55.535  sys	0m0.029s
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:55.535   16:53:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:55.535  ************************************
00:03:55.535  END TEST rpc_daemon_integrity
00:03:55.535  ************************************
00:03:55.535   16:53:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:03:55.535   16:53:18 rpc -- rpc/rpc.sh@84 -- # killprocess 58383
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 58383 ']'
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@958 -- # kill -0 58383
00:03:55.535    16:53:18 rpc -- common/autotest_common.sh@959 -- # uname
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:55.535    16:53:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58383
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:55.535  killing process with pid 58383
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58383'
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@973 -- # kill 58383
00:03:55.535   16:53:18 rpc -- common/autotest_common.sh@978 -- # wait 58383
00:03:56.909  
00:03:56.909  real	0m3.400s
00:03:56.909  user	0m3.842s
00:03:56.909  sys	0m0.580s
00:03:56.909   16:53:19 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:56.909   16:53:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:56.909  ************************************
00:03:56.909  END TEST rpc
00:03:56.909  ************************************
00:03:56.909   16:53:19  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:03:56.909   16:53:19  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:56.909   16:53:19  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:56.909   16:53:19  -- common/autotest_common.sh@10 -- # set +x
00:03:56.909  ************************************
00:03:56.909  START TEST skip_rpc
00:03:56.909  ************************************
00:03:56.909   16:53:19 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:03:56.909  * Looking for test storage...
00:03:56.909  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:03:56.909    16:53:19 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:56.909     16:53:19 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:56.909     16:53:19 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:57.168    16:53:19 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@345 -- # : 1
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:57.168     16:53:19 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:57.168    16:53:19 skip_rpc -- scripts/common.sh@368 -- # return 0
00:03:57.168    16:53:19 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:57.168    16:53:19 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:57.168  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.168  		--rc genhtml_branch_coverage=1
00:03:57.168  		--rc genhtml_function_coverage=1
00:03:57.168  		--rc genhtml_legend=1
00:03:57.168  		--rc geninfo_all_blocks=1
00:03:57.168  		--rc geninfo_unexecuted_blocks=1
00:03:57.168  		
00:03:57.168  		'
00:03:57.168    16:53:19 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:57.168  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.168  		--rc genhtml_branch_coverage=1
00:03:57.168  		--rc genhtml_function_coverage=1
00:03:57.168  		--rc genhtml_legend=1
00:03:57.168  		--rc geninfo_all_blocks=1
00:03:57.168  		--rc geninfo_unexecuted_blocks=1
00:03:57.168  		
00:03:57.168  		'
00:03:57.168    16:53:19 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:57.168  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.168  		--rc genhtml_branch_coverage=1
00:03:57.168  		--rc genhtml_function_coverage=1
00:03:57.168  		--rc genhtml_legend=1
00:03:57.168  		--rc geninfo_all_blocks=1
00:03:57.168  		--rc geninfo_unexecuted_blocks=1
00:03:57.168  		
00:03:57.168  		'
00:03:57.168    16:53:19 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:57.168  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:57.168  		--rc genhtml_branch_coverage=1
00:03:57.168  		--rc genhtml_function_coverage=1
00:03:57.168  		--rc genhtml_legend=1
00:03:57.168  		--rc geninfo_all_blocks=1
00:03:57.168  		--rc geninfo_unexecuted_blocks=1
00:03:57.168  		
00:03:57.168  		'
00:03:57.168   16:53:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:03:57.168   16:53:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:03:57.168   16:53:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:03:57.168   16:53:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:57.168   16:53:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:57.168   16:53:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:57.168  ************************************
00:03:57.168  START TEST skip_rpc
00:03:57.168  ************************************
00:03:57.168   16:53:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:03:57.168   16:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58595
00:03:57.168   16:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:57.168   16:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:03:57.168   16:53:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:03:57.168  [2024-12-09 16:53:20.084588] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:03:57.168  [2024-12-09 16:53:20.084709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ]
00:03:57.429  [2024-12-09 16:53:20.246680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:57.429  [2024-12-09 16:53:20.343915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:02.692    16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58595
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58595 ']'
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58595
00:04:02.692    16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:02.692    16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58595
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:02.692  killing process with pid 58595
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58595'
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58595
00:04:02.692   16:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58595
00:04:03.259  
00:04:03.259  real	0m6.223s
00:04:03.259  user	0m5.839s
00:04:03.260  sys	0m0.281s
00:04:03.260   16:53:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.260   16:53:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:03.260  ************************************
00:04:03.260  END TEST skip_rpc
00:04:03.260  ************************************
00:04:03.260   16:53:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:03.260   16:53:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:03.260   16:53:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:03.260   16:53:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:03.260  ************************************
00:04:03.260  START TEST skip_rpc_with_json
00:04:03.260  ************************************
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:03.260  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58688
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58688
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58688 ']'
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:03.260   16:53:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:03.518  [2024-12-09 16:53:26.346417] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:03.518  [2024-12-09 16:53:26.346540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58688 ]
00:04:03.518  [2024-12-09 16:53:26.503368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:03.776  [2024-12-09 16:53:26.591308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.343  [2024-12-09 16:53:27.195172] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:04.343  request:
00:04:04.343  {
00:04:04.343  "trtype": "tcp",
00:04:04.343  "method": "nvmf_get_transports",
00:04:04.343  "req_id": 1
00:04:04.343  }
00:04:04.343  Got JSON-RPC error response
00:04:04.343  response:
00:04:04.343  {
00:04:04.343  "code": -19,
00:04:04.343  "message": "No such device"
00:04:04.343  }
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.343  [2024-12-09 16:53:27.203270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.343   16:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:04.343  {
00:04:04.343  "subsystems": [
00:04:04.343  {
00:04:04.343  "subsystem": "fsdev",
00:04:04.343  "config": [
00:04:04.343  {
00:04:04.343  "method": "fsdev_set_opts",
00:04:04.343  "params": {
00:04:04.343  "fsdev_io_pool_size": 65535,
00:04:04.343  "fsdev_io_cache_size": 256
00:04:04.343  }
00:04:04.343  }
00:04:04.343  ]
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "subsystem": "keyring",
00:04:04.343  "config": []
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "subsystem": "iobuf",
00:04:04.343  "config": [
00:04:04.343  {
00:04:04.343  "method": "iobuf_set_options",
00:04:04.343  "params": {
00:04:04.343  "small_pool_count": 8192,
00:04:04.343  "large_pool_count": 1024,
00:04:04.343  "small_bufsize": 8192,
00:04:04.343  "large_bufsize": 135168,
00:04:04.343  "enable_numa": false
00:04:04.343  }
00:04:04.343  }
00:04:04.343  ]
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "subsystem": "sock",
00:04:04.343  "config": [
00:04:04.343  {
00:04:04.343  "method": "sock_set_default_impl",
00:04:04.343  "params": {
00:04:04.343  "impl_name": "posix"
00:04:04.343  }
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "method": "sock_impl_set_options",
00:04:04.343  "params": {
00:04:04.343  "impl_name": "ssl",
00:04:04.343  "recv_buf_size": 4096,
00:04:04.343  "send_buf_size": 4096,
00:04:04.343  "enable_recv_pipe": true,
00:04:04.343  "enable_quickack": false,
00:04:04.343  "enable_placement_id": 0,
00:04:04.343  "enable_zerocopy_send_server": true,
00:04:04.343  "enable_zerocopy_send_client": false,
00:04:04.343  "zerocopy_threshold": 0,
00:04:04.343  "tls_version": 0,
00:04:04.343  "enable_ktls": false
00:04:04.343  }
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "method": "sock_impl_set_options",
00:04:04.343  "params": {
00:04:04.343  "impl_name": "posix",
00:04:04.343  "recv_buf_size": 2097152,
00:04:04.343  "send_buf_size": 2097152,
00:04:04.343  "enable_recv_pipe": true,
00:04:04.343  "enable_quickack": false,
00:04:04.343  "enable_placement_id": 0,
00:04:04.343  "enable_zerocopy_send_server": true,
00:04:04.343  "enable_zerocopy_send_client": false,
00:04:04.343  "zerocopy_threshold": 0,
00:04:04.343  "tls_version": 0,
00:04:04.343  "enable_ktls": false
00:04:04.343  }
00:04:04.343  }
00:04:04.343  ]
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "subsystem": "vmd",
00:04:04.343  "config": []
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "subsystem": "accel",
00:04:04.343  "config": [
00:04:04.343  {
00:04:04.343  "method": "accel_set_options",
00:04:04.343  "params": {
00:04:04.343  "small_cache_size": 128,
00:04:04.343  "large_cache_size": 16,
00:04:04.343  "task_count": 2048,
00:04:04.343  "sequence_count": 2048,
00:04:04.343  "buf_count": 2048
00:04:04.343  }
00:04:04.343  }
00:04:04.343  ]
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "subsystem": "bdev",
00:04:04.343  "config": [
00:04:04.343  {
00:04:04.343  "method": "bdev_set_options",
00:04:04.343  "params": {
00:04:04.343  "bdev_io_pool_size": 65535,
00:04:04.343  "bdev_io_cache_size": 256,
00:04:04.343  "bdev_auto_examine": true,
00:04:04.343  "iobuf_small_cache_size": 128,
00:04:04.343  "iobuf_large_cache_size": 16
00:04:04.343  }
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "method": "bdev_raid_set_options",
00:04:04.343  "params": {
00:04:04.343  "process_window_size_kb": 1024,
00:04:04.343  "process_max_bandwidth_mb_sec": 0
00:04:04.343  }
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "method": "bdev_iscsi_set_options",
00:04:04.343  "params": {
00:04:04.343  "timeout_sec": 30
00:04:04.343  }
00:04:04.343  },
00:04:04.343  {
00:04:04.343  "method": "bdev_nvme_set_options",
00:04:04.343  "params": {
00:04:04.343  "action_on_timeout": "none",
00:04:04.343  "timeout_us": 0,
00:04:04.343  "timeout_admin_us": 0,
00:04:04.343  "keep_alive_timeout_ms": 10000,
00:04:04.343  "arbitration_burst": 0,
00:04:04.343  "low_priority_weight": 0,
00:04:04.343  "medium_priority_weight": 0,
00:04:04.343  "high_priority_weight": 0,
00:04:04.343  "nvme_adminq_poll_period_us": 10000,
00:04:04.343  "nvme_ioq_poll_period_us": 0,
00:04:04.343  "io_queue_requests": 0,
00:04:04.343  "delay_cmd_submit": true,
00:04:04.343  "transport_retry_count": 4,
00:04:04.343  "bdev_retry_count": 3,
00:04:04.343  "transport_ack_timeout": 0,
00:04:04.343  "ctrlr_loss_timeout_sec": 0,
00:04:04.343  "reconnect_delay_sec": 0,
00:04:04.344  "fast_io_fail_timeout_sec": 0,
00:04:04.344  "disable_auto_failback": false,
00:04:04.344  "generate_uuids": false,
00:04:04.344  "transport_tos": 0,
00:04:04.344  "nvme_error_stat": false,
00:04:04.344  "rdma_srq_size": 0,
00:04:04.344  "io_path_stat": false,
00:04:04.344  "allow_accel_sequence": false,
00:04:04.344  "rdma_max_cq_size": 0,
00:04:04.344  "rdma_cm_event_timeout_ms": 0,
00:04:04.344  "dhchap_digests": [
00:04:04.344  "sha256",
00:04:04.344  "sha384",
00:04:04.344  "sha512"
00:04:04.344  ],
00:04:04.344  "dhchap_dhgroups": [
00:04:04.344  "null",
00:04:04.344  "ffdhe2048",
00:04:04.344  "ffdhe3072",
00:04:04.344  "ffdhe4096",
00:04:04.344  "ffdhe6144",
00:04:04.344  "ffdhe8192"
00:04:04.344  ]
00:04:04.344  }
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "method": "bdev_nvme_set_hotplug",
00:04:04.344  "params": {
00:04:04.344  "period_us": 100000,
00:04:04.344  "enable": false
00:04:04.344  }
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "method": "bdev_wait_for_examine"
00:04:04.344  }
00:04:04.344  ]
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "scsi",
00:04:04.344  "config": null
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "scheduler",
00:04:04.344  "config": [
00:04:04.344  {
00:04:04.344  "method": "framework_set_scheduler",
00:04:04.344  "params": {
00:04:04.344  "name": "static"
00:04:04.344  }
00:04:04.344  }
00:04:04.344  ]
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "vhost_scsi",
00:04:04.344  "config": []
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "vhost_blk",
00:04:04.344  "config": []
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "ublk",
00:04:04.344  "config": []
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "nbd",
00:04:04.344  "config": []
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "nvmf",
00:04:04.344  "config": [
00:04:04.344  {
00:04:04.344  "method": "nvmf_set_config",
00:04:04.344  "params": {
00:04:04.344  "discovery_filter": "match_any",
00:04:04.344  "admin_cmd_passthru": {
00:04:04.344  "identify_ctrlr": false
00:04:04.344  },
00:04:04.344  "dhchap_digests": [
00:04:04.344  "sha256",
00:04:04.344  "sha384",
00:04:04.344  "sha512"
00:04:04.344  ],
00:04:04.344  "dhchap_dhgroups": [
00:04:04.344  "null",
00:04:04.344  "ffdhe2048",
00:04:04.344  "ffdhe3072",
00:04:04.344  "ffdhe4096",
00:04:04.344  "ffdhe6144",
00:04:04.344  "ffdhe8192"
00:04:04.344  ]
00:04:04.344  }
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "method": "nvmf_set_max_subsystems",
00:04:04.344  "params": {
00:04:04.344  "max_subsystems": 1024
00:04:04.344  }
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "method": "nvmf_set_crdt",
00:04:04.344  "params": {
00:04:04.344  "crdt1": 0,
00:04:04.344  "crdt2": 0,
00:04:04.344  "crdt3": 0
00:04:04.344  }
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "method": "nvmf_create_transport",
00:04:04.344  "params": {
00:04:04.344  "trtype": "TCP",
00:04:04.344  "max_queue_depth": 128,
00:04:04.344  "max_io_qpairs_per_ctrlr": 127,
00:04:04.344  "in_capsule_data_size": 4096,
00:04:04.344  "max_io_size": 131072,
00:04:04.344  "io_unit_size": 131072,
00:04:04.344  "max_aq_depth": 128,
00:04:04.344  "num_shared_buffers": 511,
00:04:04.344  "buf_cache_size": 4294967295,
00:04:04.344  "dif_insert_or_strip": false,
00:04:04.344  "zcopy": false,
00:04:04.344  "c2h_success": true,
00:04:04.344  "sock_priority": 0,
00:04:04.344  "abort_timeout_sec": 1,
00:04:04.344  "ack_timeout": 0,
00:04:04.344  "data_wr_pool_size": 0
00:04:04.344  }
00:04:04.344  }
00:04:04.344  ]
00:04:04.344  },
00:04:04.344  {
00:04:04.344  "subsystem": "iscsi",
00:04:04.344  "config": [
00:04:04.344  {
00:04:04.344  "method": "iscsi_set_options",
00:04:04.344  "params": {
00:04:04.344  "node_base": "iqn.2016-06.io.spdk",
00:04:04.344  "max_sessions": 128,
00:04:04.344  "max_connections_per_session": 2,
00:04:04.344  "max_queue_depth": 64,
00:04:04.344  "default_time2wait": 2,
00:04:04.344  "default_time2retain": 20,
00:04:04.344  "first_burst_length": 8192,
00:04:04.344  "immediate_data": true,
00:04:04.344  "allow_duplicated_isid": false,
00:04:04.344  "error_recovery_level": 0,
00:04:04.344  "nop_timeout": 60,
00:04:04.344  "nop_in_interval": 30,
00:04:04.344  "disable_chap": false,
00:04:04.344  "require_chap": false,
00:04:04.344  "mutual_chap": false,
00:04:04.344  "chap_group": 0,
00:04:04.344  "max_large_datain_per_connection": 64,
00:04:04.344  "max_r2t_per_connection": 4,
00:04:04.344  "pdu_pool_size": 36864,
00:04:04.344  "immediate_data_pool_size": 16384,
00:04:04.344  "data_out_pool_size": 2048
00:04:04.344  }
00:04:04.344  }
00:04:04.344  ]
00:04:04.344  }
00:04:04.344  ]
00:04:04.344  }
00:04:04.344   16:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:04.344   16:53:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58688
00:04:04.344   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58688 ']'
00:04:04.344   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58688
00:04:04.344    16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:04.344   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:04.344    16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58688
00:04:04.602   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:04.602  killing process with pid 58688
00:04:04.603   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:04.603   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58688'
00:04:04.603   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58688
00:04:04.603   16:53:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58688
00:04:06.016   16:53:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:06.016   16:53:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58722
00:04:06.016   16:53:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58722
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58722 ']'
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58722
00:04:11.279    16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:11.279    16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58722
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:11.279  killing process with pid 58722
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58722'
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58722
00:04:11.279   16:53:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58722
00:04:11.845   16:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:11.845   16:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:11.845  
00:04:11.845  real	0m8.584s
00:04:11.845  user	0m8.221s
00:04:11.845  sys	0m0.596s
00:04:11.845   16:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.845   16:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:11.845  ************************************
00:04:11.845  END TEST skip_rpc_with_json
00:04:11.845  ************************************
00:04:12.103   16:53:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:12.103   16:53:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:12.103   16:53:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:12.103   16:53:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:12.103  ************************************
00:04:12.103  START TEST skip_rpc_with_delay
00:04:12.103  ************************************
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:12.103    16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:12.103    16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:12.103   16:53:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:12.103  [2024-12-09 16:53:34.972895] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:12.103   16:53:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:12.103   16:53:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:12.103   16:53:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:12.103   16:53:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:12.103  
00:04:12.103  real	0m0.124s
00:04:12.103  user	0m0.066s
00:04:12.103  sys	0m0.057s
00:04:12.103   16:53:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:12.103   16:53:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:12.103  ************************************
00:04:12.103  END TEST skip_rpc_with_delay
00:04:12.103  ************************************
00:04:12.103    16:53:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:12.103   16:53:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:12.103   16:53:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:12.103   16:53:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:12.103   16:53:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:12.103   16:53:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:12.103  ************************************
00:04:12.103  START TEST exit_on_failed_rpc_init
00:04:12.103  ************************************
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58845
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58845
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58845 ']'
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:12.103  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:12.103   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:12.362  [2024-12-09 16:53:35.142973] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:12.362  [2024-12-09 16:53:35.143085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58845 ]
00:04:12.362  [2024-12-09 16:53:35.299222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.362  [2024-12-09 16:53:35.383984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:13.295   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:13.296    16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:13.296   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:13.296    16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:13.296   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:13.296   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:13.296   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:13.296   16:53:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:04:13.296  [2024-12-09 16:53:36.062129] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:13.296  [2024-12-09 16:53:36.062269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ]
00:04:13.296  [2024-12-09 16:53:36.221358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:13.296  [2024-12-09 16:53:36.320124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:13.296  [2024-12-09 16:53:36.320209] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:13.296  [2024-12-09 16:53:36.320223] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:13.296  [2024-12-09 16:53:36.320236] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58845
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58845 ']'
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58845
00:04:13.554    16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:13.554    16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58845
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:13.554  killing process with pid 58845
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58845'
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58845
00:04:13.554   16:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58845
00:04:14.926  
00:04:14.926  real	0m2.662s
00:04:14.926  user	0m2.969s
00:04:14.926  sys	0m0.420s
00:04:14.926   16:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.926   16:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:14.926  ************************************
00:04:14.926  END TEST exit_on_failed_rpc_init
00:04:14.926  ************************************
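The exit_on_failed_rpc_init block above is a deliberate-failure test: the first spdk_tgt already owns /var/tmp/spdk.sock, so the second instance launched with -m 0x2 must die in RPC initialization with a non-zero exit, which the NOT/valid_exec_arg wrapper converts back into a pass. A minimal sketch of that shape (assumes spdk_tgt is resolvable and the default socket path; the sleep stands in for the real waitforlisten polling):

    spdk_tgt -m 0x1 &                  # first target binds /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                            # stand-in for waitforlisten
    if spdk_tgt -m 0x2; then           # same socket path: RPC init fails, exits non-zero
        echo 'unexpected success' >&2  # (expected-failure path returns promptly)
        kill "$first_pid"
        exit 1
    fi
    kill -SIGINT "$first_pid" && wait "$first_pid"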
00:04:14.926   16:53:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:14.926  
00:04:14.926  real	0m17.913s
00:04:14.926  user	0m17.237s
00:04:14.926  sys	0m1.524s
00:04:14.926   16:53:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.926   16:53:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:14.926  ************************************
00:04:14.926  END TEST skip_rpc
00:04:14.926  ************************************
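Every section in this log is driven by the same run_test wrapper; it is what prints the START/END banners and the real/user/sys timing, and it is invoked again on the next line for rpc_client. A simplified sketch of that pattern only; the shipped helper in autotest_common.sh does more bookkeeping (argument checks, xtrace management):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                      # emits the real/user/sys lines seen here
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }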
00:04:14.926   16:53:37  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:14.926   16:53:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:14.926   16:53:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.926   16:53:37  -- common/autotest_common.sh@10 -- # set +x
00:04:14.926  ************************************
00:04:14.926  START TEST rpc_client
00:04:14.926  ************************************
00:04:14.926   16:53:37 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:14.926  * Looking for test storage...
00:04:14.926  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:14.926     16:53:37 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:04:14.926     16:53:37 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:14.926     16:53:37 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:14.926    16:53:37 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:14.926  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.926  		--rc genhtml_branch_coverage=1
00:04:14.926  		--rc genhtml_function_coverage=1
00:04:14.926  		--rc genhtml_legend=1
00:04:14.926  		--rc geninfo_all_blocks=1
00:04:14.926  		--rc geninfo_unexecuted_blocks=1
00:04:14.926  		
00:04:14.926  		'
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:14.926  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.926  		--rc genhtml_branch_coverage=1
00:04:14.926  		--rc genhtml_function_coverage=1
00:04:14.926  		--rc genhtml_legend=1
00:04:14.926  		--rc geninfo_all_blocks=1
00:04:14.926  		--rc geninfo_unexecuted_blocks=1
00:04:14.926  		
00:04:14.926  		'
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:14.926  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.926  		--rc genhtml_branch_coverage=1
00:04:14.926  		--rc genhtml_function_coverage=1
00:04:14.926  		--rc genhtml_legend=1
00:04:14.926  		--rc geninfo_all_blocks=1
00:04:14.926  		--rc geninfo_unexecuted_blocks=1
00:04:14.926  		
00:04:14.926  		'
00:04:14.926    16:53:37 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:14.926  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.926  		--rc genhtml_branch_coverage=1
00:04:14.926  		--rc genhtml_function_coverage=1
00:04:14.926  		--rc genhtml_legend=1
00:04:14.926  		--rc geninfo_all_blocks=1
00:04:14.926  		--rc geninfo_unexecuted_blocks=1
00:04:14.926  		
00:04:14.926  		'
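The trace above (lt 1.15 2 via cmp_versions) compares the installed lcov version against 2 component by component to pick the right coverage flags. Condensed to its core, assuming purely numeric dotted versions (the real helper validates digits via decimal()):

    lt() {                                     # succeeds when $1 < $2
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                               # equal versions are not "less than"
    }
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'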
00:04:14.926   16:53:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:15.183  OK
00:04:15.183   16:53:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:15.183  
00:04:15.183  real	0m0.189s
00:04:15.183  user	0m0.101s
00:04:15.183  sys	0m0.095s
00:04:15.183   16:53:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.183   16:53:37 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:15.183  ************************************
00:04:15.183  END TEST rpc_client
00:04:15.183  ************************************
00:04:15.183   16:53:38  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:15.183   16:53:38  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:15.183   16:53:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.183   16:53:38  -- common/autotest_common.sh@10 -- # set +x
00:04:15.183  ************************************
00:04:15.183  START TEST json_config
00:04:15.183  ************************************
00:04:15.183   16:53:38 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:15.183    16:53:38 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:15.183     16:53:38 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:15.183     16:53:38 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:04:15.183    16:53:38 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:15.183    16:53:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.183    16:53:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.183    16:53:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.183    16:53:38 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.183    16:53:38 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.183    16:53:38 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.183    16:53:38 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.183    16:53:38 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.183    16:53:38 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.183    16:53:38 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.183    16:53:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.183    16:53:38 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:15.183    16:53:38 json_config -- scripts/common.sh@345 -- # : 1
00:04:15.183    16:53:38 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.183    16:53:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.183     16:53:38 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:15.183     16:53:38 json_config -- scripts/common.sh@353 -- # local d=1
00:04:15.183     16:53:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.183     16:53:38 json_config -- scripts/common.sh@355 -- # echo 1
00:04:15.183    16:53:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.183     16:53:38 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:15.183     16:53:38 json_config -- scripts/common.sh@353 -- # local d=2
00:04:15.183     16:53:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.183     16:53:38 json_config -- scripts/common.sh@355 -- # echo 2
00:04:15.183    16:53:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.183    16:53:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.183    16:53:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.184    16:53:38 json_config -- scripts/common.sh@368 -- # return 0
00:04:15.184    16:53:38 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.184    16:53:38 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:15.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.184  		--rc genhtml_branch_coverage=1
00:04:15.184  		--rc genhtml_function_coverage=1
00:04:15.184  		--rc genhtml_legend=1
00:04:15.184  		--rc geninfo_all_blocks=1
00:04:15.184  		--rc geninfo_unexecuted_blocks=1
00:04:15.184  		
00:04:15.184  		'
00:04:15.184    16:53:38 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:15.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.184  		--rc genhtml_branch_coverage=1
00:04:15.184  		--rc genhtml_function_coverage=1
00:04:15.184  		--rc genhtml_legend=1
00:04:15.184  		--rc geninfo_all_blocks=1
00:04:15.184  		--rc geninfo_unexecuted_blocks=1
00:04:15.184  		
00:04:15.184  		'
00:04:15.184    16:53:38 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:15.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.184  		--rc genhtml_branch_coverage=1
00:04:15.184  		--rc genhtml_function_coverage=1
00:04:15.184  		--rc genhtml_legend=1
00:04:15.184  		--rc geninfo_all_blocks=1
00:04:15.184  		--rc geninfo_unexecuted_blocks=1
00:04:15.184  		
00:04:15.184  		'
00:04:15.184    16:53:38 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:15.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.184  		--rc genhtml_branch_coverage=1
00:04:15.184  		--rc genhtml_function_coverage=1
00:04:15.184  		--rc genhtml_legend=1
00:04:15.184  		--rc geninfo_all_blocks=1
00:04:15.184  		--rc geninfo_unexecuted_blocks=1
00:04:15.184  		
00:04:15.184  		'
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:15.184     16:53:38 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:15.184     16:53:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c7da8d14-0c7f-44c7-8845-095521e4a89c
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c7da8d14-0c7f-44c7-8845-095521e4a89c
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:15.184     16:53:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:15.184     16:53:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:15.184     16:53:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:15.184     16:53:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:15.184      16:53:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.184      16:53:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.184      16:53:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.184      16:53:38 json_config -- paths/export.sh@5 -- # export PATH
00:04:15.184      16:53:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@51 -- # : 0
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:15.184  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:15.184    16:53:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
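The "integer expression expected" complaint above is a real, if harmless, bug at nvmf/common.sh line 33: test is handed an empty expansion where -eq needs an integer, the comparison errors out, and the branch falls through to have_pci_nics=0 anyway. A defensive form that keeps the same outcome without the stderr noise; FLAG is a stand-in name, since the trace only shows the value expanded to '':

    # FLAG is hypothetical; the log shows only '[' '' -eq 1 ']'
    if [ "${FLAG:-0}" -eq 1 ]; then    # default unset/empty to 0 before -eq
        have_pci_nics=1
    fi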
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:04:15.184  WARNING: No tests are enabled so not running JSON configuration tests
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:04:15.184   16:53:38 json_config -- json_config/json_config.sh@28 -- # exit 0
00:04:15.184  
00:04:15.184  real	0m0.145s
00:04:15.184  user	0m0.089s
00:04:15.184  sys	0m0.060s
00:04:15.184   16:53:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.184   16:53:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:15.184  ************************************
00:04:15.184  END TEST json_config
00:04:15.184  ************************************
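json_config above is effectively a no-op on this runner: every feature flag in its gate is 0, so it prints the warning and exits 0 before starting any target. The gate as traced, with the flags defaulted so the snippet stands alone:

    : "${SPDK_TEST_BLOCKDEV:=0}" "${SPDK_TEST_ISCSI:=0}" "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_VHOST:=0}" "${SPDK_TEST_VHOST_INIT:=0}" "${SPDK_TEST_RBD:=0}"
    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
          SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi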
00:04:15.184   16:53:38  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:15.184   16:53:38  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:15.184   16:53:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.184   16:53:38  -- common/autotest_common.sh@10 -- # set +x
00:04:15.184  ************************************
00:04:15.184  START TEST json_config_extra_key
00:04:15.184  ************************************
00:04:15.184   16:53:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:15.442     16:53:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:15.442     16:53:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.442    16:53:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:15.442  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.442  		--rc genhtml_branch_coverage=1
00:04:15.442  		--rc genhtml_function_coverage=1
00:04:15.442  		--rc genhtml_legend=1
00:04:15.442  		--rc geninfo_all_blocks=1
00:04:15.442  		--rc geninfo_unexecuted_blocks=1
00:04:15.442  		
00:04:15.442  		'
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:15.442  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.442  		--rc genhtml_branch_coverage=1
00:04:15.442  		--rc genhtml_function_coverage=1
00:04:15.442  		--rc genhtml_legend=1
00:04:15.442  		--rc geninfo_all_blocks=1
00:04:15.442  		--rc geninfo_unexecuted_blocks=1
00:04:15.442  		
00:04:15.442  		'
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:15.442  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.442  		--rc genhtml_branch_coverage=1
00:04:15.442  		--rc genhtml_function_coverage=1
00:04:15.442  		--rc genhtml_legend=1
00:04:15.442  		--rc geninfo_all_blocks=1
00:04:15.442  		--rc geninfo_unexecuted_blocks=1
00:04:15.442  		
00:04:15.442  		'
00:04:15.442    16:53:38 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:15.442  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.442  		--rc genhtml_branch_coverage=1
00:04:15.442  		--rc genhtml_function_coverage=1
00:04:15.442  		--rc genhtml_legend=1
00:04:15.442  		--rc geninfo_all_blocks=1
00:04:15.442  		--rc geninfo_unexecuted_blocks=1
00:04:15.442  		
00:04:15.442  		'
00:04:15.442   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:15.442     16:53:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:15.442     16:53:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c7da8d14-0c7f-44c7-8845-095521e4a89c
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c7da8d14-0c7f-44c7-8845-095521e4a89c
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:15.442     16:53:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:15.442      16:53:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.442      16:53:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.442      16:53:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.442      16:53:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:15.442      16:53:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:15.442  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:15.442    16:53:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:15.443    16:53:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:15.443    16:53:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:15.443  INFO: launching applications...
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:15.443   16:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59051
00:04:15.443  Waiting for target to run...
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59051 /var/tmp/spdk_tgt.sock
00:04:15.443   16:53:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59051 ']'
00:04:15.443   16:53:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:15.443  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:15.443   16:53:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:15.443   16:53:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:15.443   16:53:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:15.443   16:53:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:15.443   16:53:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:15.443  [2024-12-09 16:53:38.437126] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:15.443  [2024-12-09 16:53:38.437245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ]
00:04:16.008  [2024-12-09 16:53:38.751623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:16.008  [2024-12-09 16:53:38.827251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:16.266   16:53:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:16.266   16:53:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:16.266  
00:04:16.266  INFO: shutting down applications...
00:04:16.266   16:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:16.266   16:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59051 ]]
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59051
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59051
00:04:16.266   16:53:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:16.830   16:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:16.830   16:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:16.830   16:53:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59051
00:04:16.830   16:53:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:17.395   16:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:17.395   16:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:17.395   16:53:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59051
00:04:17.395   16:53:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59051
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:17.962   16:53:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:17.962  SPDK target shutdown done
00:04:17.962  Success
00:04:17.962   16:53:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:17.962  
00:04:17.962  real	0m2.584s
00:04:17.962  user	0m2.365s
00:04:17.962  sys	0m0.383s
00:04:17.962   16:53:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.962   16:53:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:17.962  ************************************
00:04:17.962  END TEST json_config_extra_key
00:04:17.962  ************************************
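The shutdown traced in this section is SIGINT followed by a bounded liveness poll: up to 30 probes of kill -0 at 0.5 s intervals, roughly a 15 s budget, before the helper would give up. A condensed sketch of that loop; the real json_config/common.sh version also clears app_pid["$app"] and prints the Success marker:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"                    # ask the target to exit cleanly
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "target $pid still alive after 15s" >&2
        return 1
    }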
00:04:17.962   16:53:40  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:17.962   16:53:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.962   16:53:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.962   16:53:40  -- common/autotest_common.sh@10 -- # set +x
00:04:17.962  ************************************
00:04:17.962  START TEST alias_rpc
00:04:17.962  ************************************
00:04:17.962   16:53:40 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:17.962  * Looking for test storage...
00:04:17.962  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:17.962     16:53:40 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:17.962     16:53:40 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:17.962     16:53:40 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:17.962    16:53:40 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:17.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.962  		--rc genhtml_branch_coverage=1
00:04:17.962  		--rc genhtml_function_coverage=1
00:04:17.962  		--rc genhtml_legend=1
00:04:17.962  		--rc geninfo_all_blocks=1
00:04:17.962  		--rc geninfo_unexecuted_blocks=1
00:04:17.962  		
00:04:17.962  		'
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:17.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.962  		--rc genhtml_branch_coverage=1
00:04:17.962  		--rc genhtml_function_coverage=1
00:04:17.962  		--rc genhtml_legend=1
00:04:17.962  		--rc geninfo_all_blocks=1
00:04:17.962  		--rc geninfo_unexecuted_blocks=1
00:04:17.962  		
00:04:17.962  		'
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:17.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.962  		--rc genhtml_branch_coverage=1
00:04:17.962  		--rc genhtml_function_coverage=1
00:04:17.962  		--rc genhtml_legend=1
00:04:17.962  		--rc geninfo_all_blocks=1
00:04:17.962  		--rc geninfo_unexecuted_blocks=1
00:04:17.962  		
00:04:17.962  		'
00:04:17.962    16:53:40 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:17.962  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.962  		--rc genhtml_branch_coverage=1
00:04:17.963  		--rc genhtml_function_coverage=1
00:04:17.963  		--rc genhtml_legend=1
00:04:17.963  		--rc geninfo_all_blocks=1
00:04:17.963  		--rc geninfo_unexecuted_blocks=1
00:04:17.963  		
00:04:17.963  		'
00:04:17.963   16:53:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:17.963   16:53:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59143
00:04:17.963   16:53:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59143
00:04:17.963   16:53:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59143 ']'
00:04:17.963   16:53:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:17.963   16:53:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:17.963   16:53:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:17.963  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:17.963   16:53:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:17.963   16:53:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:17.963   16:53:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:18.221  [2024-12-09 16:53:41.050155] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:18.221  [2024-12-09 16:53:41.050274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ]
00:04:18.221  [2024-12-09 16:53:41.207354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:18.479  [2024-12-09 16:53:41.305618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:19.045   16:53:41 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:19.045   16:53:41 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:19.045   16:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:04:19.302   16:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59143
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59143 ']'
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59143
00:04:19.302    16:53:42 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:19.302    16:53:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59143
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:19.302  killing process with pid 59143
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59143'
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 59143
00:04:19.302   16:53:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 59143
00:04:20.674  
00:04:20.674  real	0m2.836s
00:04:20.674  user	0m2.952s
00:04:20.674  sys	0m0.393s
00:04:20.674   16:53:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:20.674   16:53:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:20.674  ************************************
00:04:20.674  END TEST alias_rpc
00:04:20.674  ************************************
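Each target start in this log goes through waitforlisten (max_retries=100 in the trace), which blocks until the new process answers on its RPC socket. A minimal stand-in that uses the real spdk_get_version RPC as the liveness probe; the polling interval is an assumption, and the shipped helper carries more error handling:

    waitforlisten() {                          # sketch, not the shipped helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }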
00:04:20.674   16:53:43  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:04:20.674   16:53:43  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:04:20.674   16:53:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:20.674   16:53:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:20.674   16:53:43  -- common/autotest_common.sh@10 -- # set +x
00:04:20.933  ************************************
00:04:20.933  START TEST spdkcli_tcp
00:04:20.933  ************************************
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:04:20.933  * Looking for test storage...
00:04:20.933  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:20.933     16:53:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:04:20.933     16:53:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:20.933     16:53:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:20.933    16:53:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:20.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:20.933  		--rc genhtml_branch_coverage=1
00:04:20.933  		--rc genhtml_function_coverage=1
00:04:20.933  		--rc genhtml_legend=1
00:04:20.933  		--rc geninfo_all_blocks=1
00:04:20.933  		--rc geninfo_unexecuted_blocks=1
00:04:20.933  		
00:04:20.933  		'
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:20.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:20.933  		--rc genhtml_branch_coverage=1
00:04:20.933  		--rc genhtml_function_coverage=1
00:04:20.933  		--rc genhtml_legend=1
00:04:20.933  		--rc geninfo_all_blocks=1
00:04:20.933  		--rc geninfo_unexecuted_blocks=1
00:04:20.933  		
00:04:20.933  		'
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:20.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:20.933  		--rc genhtml_branch_coverage=1
00:04:20.933  		--rc genhtml_function_coverage=1
00:04:20.933  		--rc genhtml_legend=1
00:04:20.933  		--rc geninfo_all_blocks=1
00:04:20.933  		--rc geninfo_unexecuted_blocks=1
00:04:20.933  		
00:04:20.933  		'
00:04:20.933    16:53:43 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:20.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:20.933  		--rc genhtml_branch_coverage=1
00:04:20.933  		--rc genhtml_function_coverage=1
00:04:20.933  		--rc genhtml_legend=1
00:04:20.933  		--rc geninfo_all_blocks=1
00:04:20.933  		--rc geninfo_unexecuted_blocks=1
00:04:20.933  		
00:04:20.933  		'
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:04:20.933    16:53:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:04:20.933    16:53:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59233
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59233
00:04:20.933   16:53:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59233 ']'
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:20.933  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:20.933   16:53:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:20.933  [2024-12-09 16:53:43.946945] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:20.933  [2024-12-09 16:53:43.947052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59233 ]
00:04:21.191  [2024-12-09 16:53:44.101891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:21.191  [2024-12-09 16:53:44.202512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:21.191  [2024-12-09 16:53:44.202623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:21.762   16:53:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:21.762   16:53:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
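With both reactors up and waitforlisten satisfied, the tcp test wires the Unix RPC socket onto TCP port 9998 through socat (visible just below) so rpc.py can exercise its TCP transport. The shape of that bridge, reusing the exact flags from the trace; a single-shot listener is enough for one RPC round trip:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front, Unix back
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true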
00:04:22.023   16:53:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59250
00:04:22.023   16:53:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:22.023   16:53:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:22.023  [
00:04:22.023    "bdev_malloc_delete",
00:04:22.023    "bdev_malloc_create",
00:04:22.023    "bdev_null_resize",
00:04:22.023    "bdev_null_delete",
00:04:22.023    "bdev_null_create",
00:04:22.023    "bdev_nvme_cuse_unregister",
00:04:22.023    "bdev_nvme_cuse_register",
00:04:22.023    "bdev_opal_new_user",
00:04:22.023    "bdev_opal_set_lock_state",
00:04:22.023    "bdev_opal_delete",
00:04:22.023    "bdev_opal_get_info",
00:04:22.023    "bdev_opal_create",
00:04:22.023    "bdev_nvme_opal_revert",
00:04:22.023    "bdev_nvme_opal_init",
00:04:22.023    "bdev_nvme_send_cmd",
00:04:22.023    "bdev_nvme_set_keys",
00:04:22.023    "bdev_nvme_get_path_iostat",
00:04:22.023    "bdev_nvme_get_mdns_discovery_info",
00:04:22.023    "bdev_nvme_stop_mdns_discovery",
00:04:22.023    "bdev_nvme_start_mdns_discovery",
00:04:22.023    "bdev_nvme_set_multipath_policy",
00:04:22.023    "bdev_nvme_set_preferred_path",
00:04:22.023    "bdev_nvme_get_io_paths",
00:04:22.023    "bdev_nvme_remove_error_injection",
00:04:22.023    "bdev_nvme_add_error_injection",
00:04:22.023    "bdev_nvme_get_discovery_info",
00:04:22.023    "bdev_nvme_stop_discovery",
00:04:22.023    "bdev_nvme_start_discovery",
00:04:22.023    "bdev_nvme_get_controller_health_info",
00:04:22.023    "bdev_nvme_disable_controller",
00:04:22.023    "bdev_nvme_enable_controller",
00:04:22.023    "bdev_nvme_reset_controller",
00:04:22.023    "bdev_nvme_get_transport_statistics",
00:04:22.023    "bdev_nvme_apply_firmware",
00:04:22.023    "bdev_nvme_detach_controller",
00:04:22.023    "bdev_nvme_get_controllers",
00:04:22.023    "bdev_nvme_attach_controller",
00:04:22.023    "bdev_nvme_set_hotplug",
00:04:22.023    "bdev_nvme_set_options",
00:04:22.023    "bdev_passthru_delete",
00:04:22.023    "bdev_passthru_create",
00:04:22.023    "bdev_lvol_set_parent_bdev",
00:04:22.023    "bdev_lvol_set_parent",
00:04:22.023    "bdev_lvol_check_shallow_copy",
00:04:22.023    "bdev_lvol_start_shallow_copy",
00:04:22.023    "bdev_lvol_grow_lvstore",
00:04:22.023    "bdev_lvol_get_lvols",
00:04:22.023    "bdev_lvol_get_lvstores",
00:04:22.023    "bdev_lvol_delete",
00:04:22.023    "bdev_lvol_set_read_only",
00:04:22.023    "bdev_lvol_resize",
00:04:22.023    "bdev_lvol_decouple_parent",
00:04:22.023    "bdev_lvol_inflate",
00:04:22.023    "bdev_lvol_rename",
00:04:22.023    "bdev_lvol_clone_bdev",
00:04:22.023    "bdev_lvol_clone",
00:04:22.023    "bdev_lvol_snapshot",
00:04:22.023    "bdev_lvol_create",
00:04:22.023    "bdev_lvol_delete_lvstore",
00:04:22.023    "bdev_lvol_rename_lvstore",
00:04:22.023    "bdev_lvol_create_lvstore",
00:04:22.023    "bdev_raid_set_options",
00:04:22.023    "bdev_raid_remove_base_bdev",
00:04:22.023    "bdev_raid_add_base_bdev",
00:04:22.023    "bdev_raid_delete",
00:04:22.023    "bdev_raid_create",
00:04:22.023    "bdev_raid_get_bdevs",
00:04:22.023    "bdev_error_inject_error",
00:04:22.023    "bdev_error_delete",
00:04:22.023    "bdev_error_create",
00:04:22.023    "bdev_split_delete",
00:04:22.023    "bdev_split_create",
00:04:22.023    "bdev_delay_delete",
00:04:22.023    "bdev_delay_create",
00:04:22.023    "bdev_delay_update_latency",
00:04:22.023    "bdev_zone_block_delete",
00:04:22.023    "bdev_zone_block_create",
00:04:22.023    "blobfs_create",
00:04:22.023    "blobfs_detect",
00:04:22.023    "blobfs_set_cache_size",
00:04:22.023    "bdev_xnvme_delete",
00:04:22.023    "bdev_xnvme_create",
00:04:22.023    "bdev_aio_delete",
00:04:22.023    "bdev_aio_rescan",
00:04:22.023    "bdev_aio_create",
00:04:22.023    "bdev_ftl_set_property",
00:04:22.023    "bdev_ftl_get_properties",
00:04:22.023    "bdev_ftl_get_stats",
00:04:22.023    "bdev_ftl_unmap",
00:04:22.023    "bdev_ftl_unload",
00:04:22.023    "bdev_ftl_delete",
00:04:22.023    "bdev_ftl_load",
00:04:22.023    "bdev_ftl_create",
00:04:22.023    "bdev_virtio_attach_controller",
00:04:22.023    "bdev_virtio_scsi_get_devices",
00:04:22.023    "bdev_virtio_detach_controller",
00:04:22.023    "bdev_virtio_blk_set_hotplug",
00:04:22.023    "bdev_iscsi_delete",
00:04:22.023    "bdev_iscsi_create",
00:04:22.023    "bdev_iscsi_set_options",
00:04:22.023    "accel_error_inject_error",
00:04:22.023    "ioat_scan_accel_module",
00:04:22.023    "dsa_scan_accel_module",
00:04:22.023    "iaa_scan_accel_module",
00:04:22.023    "keyring_file_remove_key",
00:04:22.023    "keyring_file_add_key",
00:04:22.023    "keyring_linux_set_options",
00:04:22.023    "fsdev_aio_delete",
00:04:22.023    "fsdev_aio_create",
00:04:22.023    "iscsi_get_histogram",
00:04:22.023    "iscsi_enable_histogram",
00:04:22.023    "iscsi_set_options",
00:04:22.023    "iscsi_get_auth_groups",
00:04:22.023    "iscsi_auth_group_remove_secret",
00:04:22.023    "iscsi_auth_group_add_secret",
00:04:22.023    "iscsi_delete_auth_group",
00:04:22.023    "iscsi_create_auth_group",
00:04:22.023    "iscsi_set_discovery_auth",
00:04:22.023    "iscsi_get_options",
00:04:22.023    "iscsi_target_node_request_logout",
00:04:22.023    "iscsi_target_node_set_redirect",
00:04:22.023    "iscsi_target_node_set_auth",
00:04:22.023    "iscsi_target_node_add_lun",
00:04:22.023    "iscsi_get_stats",
00:04:22.023    "iscsi_get_connections",
00:04:22.023    "iscsi_portal_group_set_auth",
00:04:22.023    "iscsi_start_portal_group",
00:04:22.023    "iscsi_delete_portal_group",
00:04:22.023    "iscsi_create_portal_group",
00:04:22.023    "iscsi_get_portal_groups",
00:04:22.023    "iscsi_delete_target_node",
00:04:22.023    "iscsi_target_node_remove_pg_ig_maps",
00:04:22.023    "iscsi_target_node_add_pg_ig_maps",
00:04:22.023    "iscsi_create_target_node",
00:04:22.023    "iscsi_get_target_nodes",
00:04:22.023    "iscsi_delete_initiator_group",
00:04:22.023    "iscsi_initiator_group_remove_initiators",
00:04:22.023    "iscsi_initiator_group_add_initiators",
00:04:22.023    "iscsi_create_initiator_group",
00:04:22.023    "iscsi_get_initiator_groups",
00:04:22.023    "nvmf_set_crdt",
00:04:22.023    "nvmf_set_config",
00:04:22.023    "nvmf_set_max_subsystems",
00:04:22.023    "nvmf_stop_mdns_prr",
00:04:22.023    "nvmf_publish_mdns_prr",
00:04:22.023    "nvmf_subsystem_get_listeners",
00:04:22.023    "nvmf_subsystem_get_qpairs",
00:04:22.023    "nvmf_subsystem_get_controllers",
00:04:22.023    "nvmf_get_stats",
00:04:22.023    "nvmf_get_transports",
00:04:22.023    "nvmf_create_transport",
00:04:22.023    "nvmf_get_targets",
00:04:22.023    "nvmf_delete_target",
00:04:22.023    "nvmf_create_target",
00:04:22.023    "nvmf_subsystem_allow_any_host",
00:04:22.023    "nvmf_subsystem_set_keys",
00:04:22.023    "nvmf_subsystem_remove_host",
00:04:22.023    "nvmf_subsystem_add_host",
00:04:22.023    "nvmf_ns_remove_host",
00:04:22.023    "nvmf_ns_add_host",
00:04:22.023    "nvmf_subsystem_remove_ns",
00:04:22.023    "nvmf_subsystem_set_ns_ana_group",
00:04:22.023    "nvmf_subsystem_add_ns",
00:04:22.023    "nvmf_subsystem_listener_set_ana_state",
00:04:22.023    "nvmf_discovery_get_referrals",
00:04:22.023    "nvmf_discovery_remove_referral",
00:04:22.023    "nvmf_discovery_add_referral",
00:04:22.023    "nvmf_subsystem_remove_listener",
00:04:22.023    "nvmf_subsystem_add_listener",
00:04:22.023    "nvmf_delete_subsystem",
00:04:22.023    "nvmf_create_subsystem",
00:04:22.023    "nvmf_get_subsystems",
00:04:22.023    "env_dpdk_get_mem_stats",
00:04:22.023    "nbd_get_disks",
00:04:22.023    "nbd_stop_disk",
00:04:22.023    "nbd_start_disk",
00:04:22.023    "ublk_recover_disk",
00:04:22.023    "ublk_get_disks",
00:04:22.023    "ublk_stop_disk",
00:04:22.023    "ublk_start_disk",
00:04:22.023    "ublk_destroy_target",
00:04:22.023    "ublk_create_target",
00:04:22.023    "virtio_blk_create_transport",
00:04:22.023    "virtio_blk_get_transports",
00:04:22.023    "vhost_controller_set_coalescing",
00:04:22.023    "vhost_get_controllers",
00:04:22.023    "vhost_delete_controller",
00:04:22.023    "vhost_create_blk_controller",
00:04:22.023    "vhost_scsi_controller_remove_target",
00:04:22.023    "vhost_scsi_controller_add_target",
00:04:22.023    "vhost_start_scsi_controller",
00:04:22.023    "vhost_create_scsi_controller",
00:04:22.023    "thread_set_cpumask",
00:04:22.023    "scheduler_set_options",
00:04:22.023    "framework_get_governor",
00:04:22.023    "framework_get_scheduler",
00:04:22.023    "framework_set_scheduler",
00:04:22.023    "framework_get_reactors",
00:04:22.023    "thread_get_io_channels",
00:04:22.023    "thread_get_pollers",
00:04:22.023    "thread_get_stats",
00:04:22.023    "framework_monitor_context_switch",
00:04:22.023    "spdk_kill_instance",
00:04:22.023    "log_enable_timestamps",
00:04:22.023    "log_get_flags",
00:04:22.023    "log_clear_flag",
00:04:22.023    "log_set_flag",
00:04:22.023    "log_get_level",
00:04:22.023    "log_set_level",
00:04:22.023    "log_get_print_level",
00:04:22.023    "log_set_print_level",
00:04:22.023    "framework_enable_cpumask_locks",
00:04:22.023    "framework_disable_cpumask_locks",
00:04:22.023    "framework_wait_init",
00:04:22.023    "framework_start_init",
00:04:22.023    "scsi_get_devices",
00:04:22.023    "bdev_get_histogram",
00:04:22.024    "bdev_enable_histogram",
00:04:22.024    "bdev_set_qos_limit",
00:04:22.024    "bdev_set_qd_sampling_period",
00:04:22.024    "bdev_get_bdevs",
00:04:22.024    "bdev_reset_iostat",
00:04:22.024    "bdev_get_iostat",
00:04:22.024    "bdev_examine",
00:04:22.024    "bdev_wait_for_examine",
00:04:22.024    "bdev_set_options",
00:04:22.024    "accel_get_stats",
00:04:22.024    "accel_set_options",
00:04:22.024    "accel_set_driver",
00:04:22.024    "accel_crypto_key_destroy",
00:04:22.024    "accel_crypto_keys_get",
00:04:22.024    "accel_crypto_key_create",
00:04:22.024    "accel_assign_opc",
00:04:22.024    "accel_get_module_info",
00:04:22.024    "accel_get_opc_assignments",
00:04:22.024    "vmd_rescan",
00:04:22.024    "vmd_remove_device",
00:04:22.024    "vmd_enable",
00:04:22.024    "sock_get_default_impl",
00:04:22.024    "sock_set_default_impl",
00:04:22.024    "sock_impl_set_options",
00:04:22.024    "sock_impl_get_options",
00:04:22.024    "iobuf_get_stats",
00:04:22.024    "iobuf_set_options",
00:04:22.024    "keyring_get_keys",
00:04:22.024    "framework_get_pci_devices",
00:04:22.024    "framework_get_config",
00:04:22.024    "framework_get_subsystems",
00:04:22.024    "fsdev_set_opts",
00:04:22.024    "fsdev_get_opts",
00:04:22.024    "trace_get_info",
00:04:22.024    "trace_get_tpoint_group_mask",
00:04:22.024    "trace_disable_tpoint_group",
00:04:22.024    "trace_enable_tpoint_group",
00:04:22.024    "trace_clear_tpoint_mask",
00:04:22.024    "trace_set_tpoint_mask",
00:04:22.024    "notify_get_notifications",
00:04:22.024    "notify_get_types",
00:04:22.024    "spdk_get_version",
00:04:22.024    "rpc_get_methods"
00:04:22.024  ]
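Every method name in the listing above traveled over TCP rather than the default UNIX socket: socat (pid 59250) bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py targets the TCP side with -s 127.0.0.1 -p 9998, plus -r 100 connection retries and a 2-second timeout. The same bridge in isolation, assuming a target is already listening on the default socket:

    # sketch: reproduce the spdkcli/tcp.sh bridge by hand
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # the RPC client now speaks TCP; socat relays each request to the UNIX socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"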
00:04:22.024   16:53:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:22.024   16:53:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:22.024   16:53:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:22.024   16:53:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:22.024   16:53:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59233
00:04:22.024   16:53:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59233 ']'
00:04:22.024   16:53:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59233
00:04:22.024    16:53:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:04:22.024   16:53:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:22.024    16:53:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59233
00:04:22.281   16:53:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:22.281   16:53:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:22.281  killing process with pid 59233
00:04:22.281   16:53:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59233'
00:04:22.281   16:53:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59233
00:04:22.281   16:53:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59233
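killprocess, expanded in the trace above, is a guarded teardown: confirm pid 59233 is still alive with kill -0, read its process name (reactor_0 here) so a sudo wrapper is never signaled directly, then kill and wait. A sketch of that flow, assuming the checks visible in the trace are the whole story:

    # sketch of the killprocess checks seen above, not the verbatim helper
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for an SPDK target
        [[ $name == sudo ]] && return 1              # refuse to signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                   # wait works here: pid is our child
    }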
00:04:23.652  
00:04:23.652  real	0m2.843s
00:04:23.652  user	0m5.137s
00:04:23.652  sys	0m0.421s
00:04:23.652   16:53:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.652   16:53:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:23.652  ************************************
00:04:23.652  END TEST spdkcli_tcp
00:04:23.652  ************************************
00:04:23.652   16:53:46  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:23.652   16:53:46  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.652   16:53:46  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.652   16:53:46  -- common/autotest_common.sh@10 -- # set +x
00:04:23.652  ************************************
00:04:23.652  START TEST dpdk_mem_utility
00:04:23.652  ************************************
00:04:23.652   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
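run_test is the wrapper behind every START/END banner and time summary in this log: it validates its argument count, prints the opening banner, times the test script, and prints the closing banner. Approximately (a sketch; the real wrapper in autotest_common.sh also manages xtrace state):

    # approximate shape of run_test; banners match the ones in this log
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh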
00:04:23.652  * Looking for test storage...
00:04:23.652  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:04:23.652    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:23.652     16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:23.652     16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:04:23.910    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:23.910     16:53:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:23.910    16:53:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:23.910    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:23.910    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:23.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.910  		--rc genhtml_branch_coverage=1
00:04:23.910  		--rc genhtml_function_coverage=1
00:04:23.910  		--rc genhtml_legend=1
00:04:23.910  		--rc geninfo_all_blocks=1
00:04:23.910  		--rc geninfo_unexecuted_blocks=1
00:04:23.910  		
00:04:23.910  		'
00:04:23.910    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:23.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.910  		--rc genhtml_branch_coverage=1
00:04:23.910  		--rc genhtml_function_coverage=1
00:04:23.910  		--rc genhtml_legend=1
00:04:23.910  		--rc geninfo_all_blocks=1
00:04:23.910  		--rc geninfo_unexecuted_blocks=1
00:04:23.910  		
00:04:23.910  		'
00:04:23.910    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:23.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.910  		--rc genhtml_branch_coverage=1
00:04:23.910  		--rc genhtml_function_coverage=1
00:04:23.910  		--rc genhtml_legend=1
00:04:23.910  		--rc geninfo_all_blocks=1
00:04:23.910  		--rc geninfo_unexecuted_blocks=1
00:04:23.910  		
00:04:23.910  		'
00:04:23.910    16:53:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:23.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.910  		--rc genhtml_branch_coverage=1
00:04:23.910  		--rc genhtml_function_coverage=1
00:04:23.910  		--rc genhtml_legend=1
00:04:23.910  		--rc geninfo_all_blocks=1
00:04:23.910  		--rc geninfo_unexecuted_blocks=1
00:04:23.910  		
00:04:23.910  		'
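The scripts/common.sh block above is a pure-bash version guard: lt 1.15 2 splits both version strings on IFS=.-: into arrays, then walks the components numerically until one side wins, which decides whether the installed lcov gets the branch/function coverage flags exported next. Reduced to a sketch (missing components default to 0 here; the real helper normalizes them through its decimal function):

    # reduced sketch of the cmp_versions walk traced above
    lt_sketch() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 smaller: less-than
        done
        return 1                                              # equal: not less-than
    }
    lt_sketch 1.15 2 && echo '1.15 < 2'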
00:04:23.910   16:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:23.910   16:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59344
00:04:23.910   16:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59344
00:04:23.910   16:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:23.910   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59344 ']'
00:04:23.910   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:23.910   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:23.910  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:23.910   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:23.910   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:23.910   16:53:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:23.911  [2024-12-09 16:53:46.812286] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:23.911  [2024-12-09 16:53:46.812433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59344 ]
00:04:24.168  [2024-12-09 16:53:46.976107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:24.168  [2024-12-09 16:53:47.074312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.735   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:24.735   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:04:24.735   16:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:24.735   16:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:24.735   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:24.735   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:24.735  {
00:04:24.735  "filename": "/tmp/spdk_mem_dump.txt"
00:04:24.735  }
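env_dpdk_get_mem_stats returns nothing but a path: the call asks the live target to serialize its DPDK heap and memzone state into /tmp/spdk_mem_dump.txt, and everything dpdk_mem_info.py prints afterwards is post-processing of that file. The same call by hand, against the default RPC socket:

    # trigger the dump and peek at the raw file the RPC reports
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # => { "filename": "/tmp/spdk_mem_dump.txt" }
    head /tmp/spdk_mem_dump.txt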
00:04:24.735   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:24.735   16:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:04:24.735  DPDK memory size 824.000000 MiB in 1 heap(s)
00:04:24.735  1 heaps totaling size 824.000000 MiB
00:04:24.735    size:  824.000000 MiB heap id: 0
00:04:24.735  end heaps----------
00:04:24.735  9 mempools totaling size 603.782043 MiB
00:04:24.735    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:04:24.735    size:  158.602051 MiB name: PDU_data_out_Pool
00:04:24.735    size:  100.555481 MiB name: bdev_io_59344
00:04:24.735    size:   50.003479 MiB name: msgpool_59344
00:04:24.735    size:   36.509338 MiB name: fsdev_io_59344
00:04:24.735    size:   21.763794 MiB name: PDU_Pool
00:04:24.735    size:   19.513306 MiB name: SCSI_TASK_Pool
00:04:24.735    size:    4.133484 MiB name: evtpool_59344
00:04:24.735    size:    0.026123 MiB name: Session_Pool
00:04:24.735  end mempools-------
00:04:24.735  6 memzones totaling size 4.142822 MiB
00:04:24.735    size:    1.000366 MiB name: RG_ring_0_59344
00:04:24.735    size:    1.000366 MiB name: RG_ring_1_59344
00:04:24.735    size:    1.000366 MiB name: RG_ring_4_59344
00:04:24.735    size:    1.000366 MiB name: RG_ring_5_59344
00:04:24.735    size:    0.125366 MiB name: RG_ring_2_59344
00:04:24.735    size:    0.015991 MiB name: RG_ring_3_59344
00:04:24.735  end memzones-------
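With no arguments, dpdk_mem_info.py condenses the dump into the totals above: one 824 MiB heap, 9 mempools, and 6 memzones, with the _59344 suffix being the target's pid baked into each per-process pool name via --file-prefix=spdk_pid59344. The -m 0 invocation that follows appears to select heap id 0 and expand it element by element; both calls are verbatim from this run:

    # summary totals first, then the per-element walk of heap 0
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0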
00:04:24.735   16:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:04:24.735  heap id: 0 total size: 824.000000 MiB number of busy elements: 327 number of free elements: 18
00:04:24.735    list of free elements. size: 16.778442 MiB
00:04:24.735      element at address: 0x200006400000 with size:    1.995972 MiB
00:04:24.735      element at address: 0x20000a600000 with size:    1.995972 MiB
00:04:24.735      element at address: 0x200003e00000 with size:    1.991028 MiB
00:04:24.735      element at address: 0x200019500040 with size:    0.999939 MiB
00:04:24.735      element at address: 0x200019900040 with size:    0.999939 MiB
00:04:24.735      element at address: 0x200019a00000 with size:    0.999084 MiB
00:04:24.735      element at address: 0x200032600000 with size:    0.994324 MiB
00:04:24.735      element at address: 0x200000400000 with size:    0.992004 MiB
00:04:24.735      element at address: 0x200019200000 with size:    0.959656 MiB
00:04:24.735      element at address: 0x200019d00040 with size:    0.936401 MiB
00:04:24.735      element at address: 0x200000200000 with size:    0.716980 MiB
00:04:24.735      element at address: 0x20001b400000 with size:    0.559265 MiB
00:04:24.735      element at address: 0x200000c00000 with size:    0.489197 MiB
00:04:24.735      element at address: 0x200019600000 with size:    0.488220 MiB
00:04:24.736      element at address: 0x200019e00000 with size:    0.485413 MiB
00:04:24.736      element at address: 0x200012c00000 with size:    0.433472 MiB
00:04:24.736      element at address: 0x200028800000 with size:    0.390686 MiB
00:04:24.736      element at address: 0x200000800000 with size:    0.350891 MiB
00:04:24.736    list of standard malloc elements. size: 199.290649 MiB
00:04:24.736      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:04:24.736      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:04:24.736      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:04:24.736      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:04:24.736      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:04:24.736      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:04:24.736      element at address: 0x200019deff40 with size:    0.062683 MiB
00:04:24.736      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:04:24.736      element at address: 0x20000a5ff040 with size:    0.000427 MiB
00:04:24.736      element at address: 0x200019defdc0 with size:    0.000366 MiB
00:04:24.736      element at address: 0x200012bff040 with size:    0.000305 MiB
00:04:24.736      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fdf40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe040 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe140 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe240 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe340 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe440 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe540 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe640 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe740 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe840 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fe940 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fea40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004feb40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fec40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fed40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fee40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004fef40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff040 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff140 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff240 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff340 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff440 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff540 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff640 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff740 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ffbc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e1c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e2c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e3c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e4c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e5c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e6c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e7c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e8c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087e9c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087eac0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087ebc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087ecc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087edc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087eec0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087efc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087f0c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087f1c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087f2c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d3c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d4c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d5c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d6c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d7c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d8c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7d9c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7dac0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7dbc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7dcc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7ddc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7dec0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7dfc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e0c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e1c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e2c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e3c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e4c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e5c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e6c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e7c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e8c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7e9c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7eac0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000c7ebc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200000cff000 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff200 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff300 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff400 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff500 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff600 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff700 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff800 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ff900 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ffa00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ffb00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff180 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff280 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff380 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff480 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff580 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff680 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff780 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff880 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bff980 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bffa80 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6ef80 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f080 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f180 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f280 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f380 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f480 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f580 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f680 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f780 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012c6f880 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200012cefbc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000192fdd00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967cfc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d0c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d1c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d2c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d3c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d4c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d5c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d6c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d7c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d8c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001967d9c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x2000196fdd00 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200019affc40 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200019defbc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200019defcc0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x200019ebc680 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001b48f2c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001b48f3c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001b48f4c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001b48f5c0 with size:    0.000244 MiB
00:04:24.736      element at address: 0x20001b48f6c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48f7c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48f8c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48f9c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48fac0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48fbc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48fcc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48fdc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48fec0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b48ffc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4900c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4901c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4902c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4903c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4904c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4905c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4906c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4907c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4908c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4909c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b490ac0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b490bc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b490cc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b490dc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b490ec0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b490fc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4910c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4911c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4912c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4913c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4914c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4915c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4916c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4917c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4918c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4919c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b491ac0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b491bc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b491cc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b491dc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b491ec0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b491fc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4920c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4921c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4922c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4923c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4924c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4925c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4926c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4927c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4928c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4929c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b492ac0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b492bc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b492cc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b492dc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b492ec0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b492fc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4930c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4931c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4932c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4933c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4934c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4935c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4936c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4937c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4938c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4939c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b493ac0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b493bc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b493cc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b493dc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b493ec0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b493fc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4940c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4941c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4942c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4943c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4944c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4945c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4946c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4947c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4948c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4949c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b494ac0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b494bc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b494cc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b494dc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b494ec0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b494fc0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4950c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4951c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4952c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20001b4953c0 with size:    0.000244 MiB
00:04:24.737      element at address: 0x200028864040 with size:    0.000244 MiB
00:04:24.737      element at address: 0x200028864140 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886ae00 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b080 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b180 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b280 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b380 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b480 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b580 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b680 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b780 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b880 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886b980 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886ba80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886bb80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886bc80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886bd80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886be80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886bf80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c080 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c180 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c280 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c380 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c480 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c580 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c680 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c780 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c880 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886c980 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886ca80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886cb80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886cc80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886cd80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886ce80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886cf80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d080 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d180 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d280 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d380 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d480 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d580 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d680 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d780 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d880 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886d980 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886da80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886db80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886dc80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886dd80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886de80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886df80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e080 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e180 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e280 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e380 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e480 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e580 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e680 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e780 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e880 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886e980 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886ea80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886eb80 with size:    0.000244 MiB
00:04:24.737      element at address: 0x20002886ec80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886ed80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886ee80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886ef80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f080 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f180 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f280 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f380 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f480 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f580 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f680 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f780 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f880 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886f980 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886fa80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886fb80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886fc80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886fd80 with size:    0.000244 MiB
00:04:24.738      element at address: 0x20002886fe80 with size:    0.000244 MiB
00:04:24.738    list of memzone associated elements. size: 607.930908 MiB
00:04:24.738      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:04:24.738        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:24.738      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:04:24.738        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:24.738      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:04:24.738        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_59344_0
00:04:24.738      element at address: 0x200000dff340 with size:   48.003113 MiB
00:04:24.738        associated memzone info: size:   48.002930 MiB name: MP_msgpool_59344_0
00:04:24.738      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:04:24.738        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_59344_0
00:04:24.738      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:04:24.738        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:04:24.738      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:04:24.738        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:24.738      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:04:24.738        associated memzone info: size:    3.000122 MiB name: MP_evtpool_59344_0
00:04:24.738      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:04:24.738        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_59344
00:04:24.738      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:04:24.738        associated memzone info: size:    1.007996 MiB name: MP_evtpool_59344
00:04:24.738      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:04:24.738        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:04:24.738      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:04:24.738        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:24.738      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:04:24.738        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:04:24.738      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:04:24.738        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:24.738      element at address: 0x200000cff100 with size:    1.000549 MiB
00:04:24.738        associated memzone info: size:    1.000366 MiB name: RG_ring_0_59344
00:04:24.738      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:04:24.738        associated memzone info: size:    1.000366 MiB name: RG_ring_1_59344
00:04:24.738      element at address: 0x200019affd40 with size:    1.000549 MiB
00:04:24.738        associated memzone info: size:    1.000366 MiB name: RG_ring_4_59344
00:04:24.738      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:04:24.738        associated memzone info: size:    1.000366 MiB name: RG_ring_5_59344
00:04:24.738      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:04:24.738        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_59344
00:04:24.738      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:04:24.738        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_59344
00:04:24.738      element at address: 0x20001967dac0 with size:    0.500549 MiB
00:04:24.738        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:04:24.738      element at address: 0x200012c6f980 with size:    0.500549 MiB
00:04:24.738        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:24.738      element at address: 0x200019e7c440 with size:    0.250549 MiB
00:04:24.738        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:24.738      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:04:24.738        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_59344
00:04:24.738      element at address: 0x20000085df80 with size:    0.125549 MiB
00:04:24.738        associated memzone info: size:    0.125366 MiB name: RG_ring_2_59344
00:04:24.738      element at address: 0x2000192f5ac0 with size:    0.031799 MiB
00:04:24.738        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:24.738      element at address: 0x200028864240 with size:    0.023804 MiB
00:04:24.738        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:04:24.738      element at address: 0x200000859d40 with size:    0.016174 MiB
00:04:24.738        associated memzone info: size:    0.015991 MiB name: RG_ring_3_59344
00:04:24.738      element at address: 0x20002886a3c0 with size:    0.002502 MiB
00:04:24.738        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:04:24.738      element at address: 0x2000004ffa40 with size:    0.000366 MiB
00:04:24.738        associated memzone info: size:    0.000183 MiB name: MP_msgpool_59344
00:04:24.738      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:04:24.738        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_59344
00:04:24.738      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:04:24.738        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_59344
00:04:24.738      element at address: 0x20002886af00 with size:    0.000366 MiB
00:04:24.738        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
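Most rows in the walk above are 0.000244 MiB, i.e. 256 bytes, which looks like the minimum malloc-element granularity in this heap. One quick sanity check on a dump like this is to re-total the reported sizes from a saved copy of the output (dump.txt is hypothetical here, just this section redirected to a file):

    # illustrative only: re-sum the reported element sizes
    awk '/with size:/ { sum += $(NF - 1); n++ }
         END { printf "%.6f MiB across %d elements\n", sum, n }' dump.txt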
00:04:24.995   16:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:24.995   16:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59344
00:04:24.995   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59344 ']'
00:04:24.995   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59344
00:04:24.995    16:53:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:24.996   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:24.996    16:53:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59344
00:04:24.996  killing process with pid 59344
00:04:24.996   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:24.996   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:24.996   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59344'
00:04:24.996   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59344
00:04:24.996   16:53:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59344
00:04:26.371  
00:04:26.371  real	0m2.634s
00:04:26.371  user	0m2.683s
00:04:26.371  sys	0m0.378s
00:04:26.371  ************************************
00:04:26.371  END TEST dpdk_mem_utility
00:04:26.371  ************************************
00:04:26.371   16:53:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.371   16:53:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:26.371   16:53:49  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:26.371   16:53:49  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.371   16:53:49  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.371   16:53:49  -- common/autotest_common.sh@10 -- # set +x
00:04:26.371  ************************************
00:04:26.371  START TEST event
00:04:26.371  ************************************
00:04:26.371   16:53:49 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:26.371  * Looking for test storage...
00:04:26.371  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:26.371    16:53:49 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:26.371     16:53:49 event -- common/autotest_common.sh@1711 -- # lcov --version
00:04:26.371     16:53:49 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:26.371    16:53:49 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:26.371    16:53:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:26.371    16:53:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:26.371    16:53:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:26.371    16:53:49 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:26.371    16:53:49 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:26.371    16:53:49 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:26.371    16:53:49 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:26.371    16:53:49 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:26.371    16:53:49 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:26.371    16:53:49 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:26.371    16:53:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:26.371    16:53:49 event -- scripts/common.sh@344 -- # case "$op" in
00:04:26.371    16:53:49 event -- scripts/common.sh@345 -- # : 1
00:04:26.371    16:53:49 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:26.371    16:53:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:26.371     16:53:49 event -- scripts/common.sh@365 -- # decimal 1
00:04:26.371     16:53:49 event -- scripts/common.sh@353 -- # local d=1
00:04:26.371     16:53:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:26.371     16:53:49 event -- scripts/common.sh@355 -- # echo 1
00:04:26.371    16:53:49 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:26.630     16:53:49 event -- scripts/common.sh@366 -- # decimal 2
00:04:26.630     16:53:49 event -- scripts/common.sh@353 -- # local d=2
00:04:26.630     16:53:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:26.630     16:53:49 event -- scripts/common.sh@355 -- # echo 2
00:04:26.630    16:53:49 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:26.630    16:53:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:26.630    16:53:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:26.630    16:53:49 event -- scripts/common.sh@368 -- # return 0
00:04:26.630    16:53:49 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:26.630    16:53:49 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:26.630  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.630  		--rc genhtml_branch_coverage=1
00:04:26.630  		--rc genhtml_function_coverage=1
00:04:26.630  		--rc genhtml_legend=1
00:04:26.630  		--rc geninfo_all_blocks=1
00:04:26.630  		--rc geninfo_unexecuted_blocks=1
00:04:26.630  		
00:04:26.630  		'
00:04:26.630    16:53:49 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:26.630  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.630  		--rc genhtml_branch_coverage=1
00:04:26.630  		--rc genhtml_function_coverage=1
00:04:26.630  		--rc genhtml_legend=1
00:04:26.630  		--rc geninfo_all_blocks=1
00:04:26.630  		--rc geninfo_unexecuted_blocks=1
00:04:26.630  		
00:04:26.630  		'
00:04:26.630    16:53:49 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:26.630  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.630  		--rc genhtml_branch_coverage=1
00:04:26.630  		--rc genhtml_function_coverage=1
00:04:26.630  		--rc genhtml_legend=1
00:04:26.630  		--rc geninfo_all_blocks=1
00:04:26.630  		--rc geninfo_unexecuted_blocks=1
00:04:26.630  		
00:04:26.630  		'
00:04:26.630    16:53:49 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:26.630  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.630  		--rc genhtml_branch_coverage=1
00:04:26.630  		--rc genhtml_function_coverage=1
00:04:26.630  		--rc genhtml_legend=1
00:04:26.630  		--rc geninfo_all_blocks=1
00:04:26.630  		--rc geninfo_unexecuted_blocks=1
00:04:26.630  		
00:04:26.630  		'
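The block above traces scripts/common.sh deciding whether the installed lcov predates 2.x: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-', and ':' and compares them component by component, treating missing components as 0. A standalone sketch of that comparison (hypothetical function name; the real script also validates each component through its decimal helper):

#!/usr/bin/env bash
# Component-wise version compare: returns 0 (true) when $1 < $2.
# Assumes purely numeric components.
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # strictly smaller
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1   # strictly larger
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo 'old lcov: enable the branch/function coverage flags'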
00:04:26.630   16:53:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:26.630    16:53:49 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:26.630   16:53:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:26.630   16:53:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:26.630   16:53:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.630   16:53:49 event -- common/autotest_common.sh@10 -- # set +x
00:04:26.630  ************************************
00:04:26.630  START TEST event_perf
00:04:26.630  ************************************
00:04:26.630   16:53:49 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:26.630  Running I/O for 1 seconds...
00:04:26.630  [2024-12-09 16:53:49.449303] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:26.630  [2024-12-09 16:53:49.449508] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59436 ]
00:04:26.630  [2024-12-09 16:53:49.607837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:26.888  [2024-12-09 16:53:49.693946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:26.888  [2024-12-09 16:53:49.694334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:26.888  Running I/O for 1 seconds...
00:04:26.888  [2024-12-09 16:53:49.694366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:26.888  [2024-12-09 16:53:49.694049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:27.822  
00:04:27.822  lcore  0:   147216
00:04:27.822  lcore  1:   147218
00:04:27.822  lcore  2:   147219
00:04:27.822  lcore  3:   147217
00:04:27.822  done.
00:04:27.822  
00:04:27.822  real	0m1.406s
00:04:27.822  user	0m4.205s
00:04:27.822  sys	0m0.076s
00:04:27.822   16:53:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.822   16:53:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:27.822  ************************************
00:04:27.822  END TEST event_perf
00:04:27.822  ************************************
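Each test above goes through the same run_test wrapper: it validates that a command was actually passed (the '[' N -le 1 ']' guard in the trace), prints the START/END banners, and times the payload, which is where the real/user/sys lines in this log come from. A plausible shape for that wrapper, with illustrative names:

#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: banner, time the command, banner.
run_test_sketch() {
    local name=$1; shift
    (( $# >= 1 )) || return 1    # mirrors the argument-count guard above
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test_sketch event_perf ./event_perf -m 0xF -t 1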
00:04:28.080   16:53:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:28.080   16:53:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:28.080   16:53:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:28.080   16:53:50 event -- common/autotest_common.sh@10 -- # set +x
00:04:28.080  ************************************
00:04:28.080  START TEST event_reactor
00:04:28.080  ************************************
00:04:28.080   16:53:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:28.080  [2024-12-09 16:53:50.907745] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:28.080  [2024-12-09 16:53:50.907841] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59475 ]
00:04:28.080  [2024-12-09 16:53:51.063445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:28.338  [2024-12-09 16:53:51.144581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.271  test_start
00:04:29.271  oneshot
00:04:29.271  tick 100
00:04:29.271  tick 100
00:04:29.271  tick 250
00:04:29.271  tick 100
00:04:29.271  tick 100
00:04:29.271  tick 250
00:04:29.271  tick 100
00:04:29.271  tick 500
00:04:29.271  tick 100
00:04:29.271  tick 100
00:04:29.271  tick 250
00:04:29.271  tick 100
00:04:29.271  tick 100
00:04:29.271  test_end
00:04:29.271  
00:04:29.271  real	0m1.390s
00:04:29.271  user	0m1.218s
00:04:29.271  sys	0m0.065s
00:04:29.271   16:53:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:29.271   16:53:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:29.271  ************************************
00:04:29.271  END TEST event_reactor
00:04:29.271  ************************************
00:04:29.271   16:53:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:29.271   16:53:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:29.271   16:53:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:29.271   16:53:52 event -- common/autotest_common.sh@10 -- # set +x
00:04:29.271  ************************************
00:04:29.271  START TEST event_reactor_perf
00:04:29.271  ************************************
00:04:29.271   16:53:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:29.529  [2024-12-09 16:53:52.333322] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:29.529  [2024-12-09 16:53:52.333584] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ]
00:04:29.529  [2024-12-09 16:53:52.494783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:29.787  [2024-12-09 16:53:52.589681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:30.723  test_start
00:04:30.723  test_end
00:04:30.723  Performance:   312370 events per second
00:04:30.723  ************************************
00:04:30.723  END TEST event_reactor_perf
00:04:30.723  ************************************
00:04:30.723  
00:04:30.723  real	0m1.439s
00:04:30.723  user	0m1.266s
00:04:30.723  sys	0m0.066s
00:04:30.723   16:53:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:30.723   16:53:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:30.982    16:53:53 event -- event/event.sh@49 -- # uname -s
00:04:30.982   16:53:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:30.982   16:53:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:30.982   16:53:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:30.982   16:53:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:30.982   16:53:53 event -- common/autotest_common.sh@10 -- # set +x
00:04:30.982  ************************************
00:04:30.982  START TEST event_scheduler
00:04:30.982  ************************************
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:30.982  * Looking for test storage...
00:04:30.982  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:30.982     16:53:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:30.982     16:53:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:30.982     16:53:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:30.982    16:53:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:30.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.982  		--rc genhtml_branch_coverage=1
00:04:30.982  		--rc genhtml_function_coverage=1
00:04:30.982  		--rc genhtml_legend=1
00:04:30.982  		--rc geninfo_all_blocks=1
00:04:30.982  		--rc geninfo_unexecuted_blocks=1
00:04:30.982  		
00:04:30.982  		'
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:30.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.982  		--rc genhtml_branch_coverage=1
00:04:30.982  		--rc genhtml_function_coverage=1
00:04:30.982  		--rc genhtml_legend=1
00:04:30.982  		--rc geninfo_all_blocks=1
00:04:30.982  		--rc geninfo_unexecuted_blocks=1
00:04:30.982  		
00:04:30.982  		'
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:30.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.982  		--rc genhtml_branch_coverage=1
00:04:30.982  		--rc genhtml_function_coverage=1
00:04:30.982  		--rc genhtml_legend=1
00:04:30.982  		--rc geninfo_all_blocks=1
00:04:30.982  		--rc geninfo_unexecuted_blocks=1
00:04:30.982  		
00:04:30.982  		'
00:04:30.982    16:53:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:30.982  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:30.982  		--rc genhtml_branch_coverage=1
00:04:30.982  		--rc genhtml_function_coverage=1
00:04:30.982  		--rc genhtml_legend=1
00:04:30.982  		--rc geninfo_all_blocks=1
00:04:30.982  		--rc geninfo_unexecuted_blocks=1
00:04:30.982  		
00:04:30.982  		'
00:04:30.982   16:53:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:30.982   16:53:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59582
00:04:30.982   16:53:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:30.982   16:53:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59582
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59582 ']'
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:30.982  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:30.982   16:53:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:30.982   16:53:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:30.983  [2024-12-09 16:53:54.007912] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:30.983  [2024-12-09 16:53:54.008046] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ]
00:04:31.241  [2024-12-09 16:53:54.172559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:31.241  [2024-12-09 16:53:54.277724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:31.241  [2024-12-09 16:53:54.277843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:31.241  [2024-12-09 16:53:54.279114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:31.241  [2024-12-09 16:53:54.279117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:04:31.807   16:53:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:31.807  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:31.807  POWER: Cannot set governor of lcore 0 to userspace
00:04:31.807  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:31.807  POWER: Cannot set governor of lcore 0 to performance
00:04:31.807  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:31.807  POWER: Cannot set governor of lcore 0 to userspace
00:04:31.807  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:31.807  POWER: Cannot set governor of lcore 0 to userspace
00:04:31.807  GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:04:31.807  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:04:31.807  POWER: Unable to set Power Management Environment for lcore 0
00:04:31.807  [2024-12-09 16:53:54.793406] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:04:31.807  [2024-12-09 16:53:54.793461] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:04:31.807  [2024-12-09 16:53:54.793636] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:31.807  [2024-12-09 16:53:54.793707] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:31.807  [2024-12-09 16:53:54.793752] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:31.807  [2024-12-09 16:53:54.793796] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:31.807   16:53:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:31.807   16:53:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  [2024-12-09 16:53:55.018995] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:32.066   16:53:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.066   16:53:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:32.066   16:53:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.066   16:53:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  ************************************
00:04:32.066  START TEST scheduler_create_thread
00:04:32.066  ************************************
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  2
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  3
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  4
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  5
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  6
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.066  7
00:04:32.066   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.067  8
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.067  9
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.067  10
00:04:32.067   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:32.325    16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:32.325   16:53:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:33.258  ************************************
00:04:33.258  END TEST scheduler_create_thread
00:04:33.258  ************************************
00:04:33.258   16:53:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:33.258  
00:04:33.258  real	0m1.171s
00:04:33.258  user	0m0.015s
00:04:33.258  sys	0m0.005s
00:04:33.258   16:53:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:33.258   16:53:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
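scheduler_create_thread drives the scheduler app purely over RPC: one busy (active 100) and one idle (active 0) thread pinned to each of the four cores, an unpinned one-third-active thread, an unpinned idle thread whose activity is later raised to 50, and one thread that is created and immediately deleted. Condensed into the same plugin RPCs (the rpc_cmd wrapper and socket path below are assumptions based on the trace):

#!/usr/bin/env bash
# Recreate the thread mix traced above via the scheduler plugin RPCs.
rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }  # socket from the trace

for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"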
00:04:33.258   16:53:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:33.258   16:53:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59582
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59582 ']'
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59582
00:04:33.258    16:53:56 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:33.258    16:53:56 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59582
00:04:33.258  killing process with pid 59582
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59582'
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59582
00:04:33.258   16:53:56 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59582
00:04:33.825  [2024-12-09 16:53:56.680078] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:34.397  
00:04:34.397  real	0m3.479s
00:04:34.397  user	0m5.438s
00:04:34.397  sys	0m0.360s
00:04:34.397   16:53:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:34.397   16:53:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:34.397  ************************************
00:04:34.397  END TEST event_scheduler
00:04:34.397  ************************************
00:04:34.397   16:53:57 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:34.397   16:53:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:34.397   16:53:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:34.397   16:53:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:34.397   16:53:57 event -- common/autotest_common.sh@10 -- # set +x
00:04:34.397  ************************************
00:04:34.397  START TEST app_repeat
00:04:34.397  ************************************
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:34.397  Process app_repeat pid: 59672
00:04:34.397  spdk_app_start Round 0
00:04:34.397  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59672
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59672'
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59672 /var/tmp/spdk-nbd.sock
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59672 ']'
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:34.397   16:53:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:34.397   16:53:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:34.397  [2024-12-09 16:53:57.350311] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:34.397  [2024-12-09 16:53:57.350403] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59672 ]
00:04:34.654  [2024-12-09 16:53:57.504579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:34.654  [2024-12-09 16:53:57.605856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.654  [2024-12-09 16:53:57.605892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:35.220   16:53:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:35.220   16:53:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:35.220   16:53:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:35.478  Malloc0
00:04:35.478   16:53:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:35.736  Malloc1
00:04:35.736   16:53:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:35.736   16:53:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:35.995  /dev/nbd0
00:04:35.995    16:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:35.995   16:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:35.995  1+0 records in
00:04:35.995  1+0 records out
00:04:35.995  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694655 s, 5.9 MB/s
00:04:35.995    16:53:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:35.995   16:53:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:35.995   16:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:35.995   16:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:35.995   16:53:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:36.253  /dev/nbd1
00:04:36.253    16:53:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:36.253   16:53:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:36.253  1+0 records in
00:04:36.253  1+0 records out
00:04:36.253  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031135 s, 13.2 MB/s
00:04:36.253    16:53:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:36.253   16:53:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:36.253   16:53:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:36.253   16:53:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
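Starting each disk goes through the waitfornbd helper traced above: poll /proc/partitions until the kernel exposes the device, then prove it is actually readable with a single O_DIRECT block read. A sketch of that two-phase wait (hypothetical helper name and scratch path; the real helper's retry delay may differ):

#!/usr/bin/env bash
# Phase 1: wait for /dev/$1 to appear in /proc/partitions.
# Phase 2: confirm a raw 4 KiB read succeeds and produced data.
waitfornbd_sketch() {
    local nbd=$1 i tmp=/tmp/nbdtest
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    (( i <= 20 )) || return 1    # device never appeared
    for (( i = 1; i <= 20; i++ )); do
        if dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
           [ "$(stat -c %s "$tmp")" -ne 0 ]; then
            rm -f "$tmp"
            return 0
        fi
        sleep 0.1
    done
    rm -f "$tmp"
    return 1
}

waitfornbd_sketch nbd0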
00:04:36.253    16:53:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:36.253    16:53:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.253     16:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:36.511    16:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:36.511    {
00:04:36.511      "nbd_device": "/dev/nbd0",
00:04:36.511      "bdev_name": "Malloc0"
00:04:36.511    },
00:04:36.511    {
00:04:36.511      "nbd_device": "/dev/nbd1",
00:04:36.511      "bdev_name": "Malloc1"
00:04:36.511    }
00:04:36.511  ]'
00:04:36.511     16:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:36.511    {
00:04:36.511      "nbd_device": "/dev/nbd0",
00:04:36.511      "bdev_name": "Malloc0"
00:04:36.511    },
00:04:36.511    {
00:04:36.511      "nbd_device": "/dev/nbd1",
00:04:36.511      "bdev_name": "Malloc1"
00:04:36.511    }
00:04:36.511  ]'
00:04:36.511     16:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:36.511    16:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:36.511  /dev/nbd1'
00:04:36.511     16:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:36.511     16:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:36.511  /dev/nbd1'
00:04:36.511    16:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:36.511    16:53:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
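Before any I/O, the test cross-checks bookkeeping: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq pulls out the device paths, and grep -c counts them against the expected two. The same check as a standalone snippet (socket and rpc.py paths taken from the log; the relative script path is illustrative):

#!/usr/bin/env bash
# Count exported nbd devices via the app's RPC socket and verify it.
sock=/var/tmp/spdk-nbd.sock
disks_json=$(./scripts/rpc.py -s "$sock" nbd_get_disks)
names=$(jq -r '.[] | .nbd_device' <<< "$disks_json")
count=$(grep -c /dev/nbd <<< "$names")
if [ "$count" -ne 2 ]; then
    echo "expected 2 nbd devices, found $count" >&2
    exit 1
fi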
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:36.511  256+0 records in
00:04:36.511  256+0 records out
00:04:36.511  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998818 s, 105 MB/s
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:36.511  256+0 records in
00:04:36.511  256+0 records out
00:04:36.511  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180749 s, 58.0 MB/s
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:36.511  256+0 records in
00:04:36.511  256+0 records out
00:04:36.511  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205981 s, 50.9 MB/s
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:36.511   16:53:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
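The data path itself is exercised with a plain write-then-verify round trip: 1 MiB of /dev/urandom is staged in a temp file, pushed through each nbd device with O_DIRECT writes, and read back via cmp against the same file. Condensed into a standalone snippet (paths illustrative):

#!/usr/bin/env bash
# Write one random 1 MiB pattern through both nbd devices, then verify.
pattern=/tmp/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$pattern" "$nbd" || { echo "verify failed on $nbd" >&2; exit 1; }
done
rm -f "$pattern"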
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:36.769    16:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:36.769   16:53:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:37.030    16:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:37.030   16:53:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:37.030    16:53:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:37.030    16:53:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:37.030     16:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:37.289    16:54:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:37.289     16:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:37.289     16:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:37.289    16:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:37.289     16:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:37.289     16:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:37.289     16:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:37.289    16:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:37.289    16:54:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:37.289   16:54:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:37.289   16:54:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:37.289   16:54:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:37.289   16:54:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:37.547   16:54:00 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:38.481  [2024-12-09 16:54:01.243058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:38.482  [2024-12-09 16:54:01.318313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:38.482  [2024-12-09 16:54:01.318336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.482  [2024-12-09 16:54:01.423460] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:38.482  [2024-12-09 16:54:01.423526] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:41.009  spdk_app_start Round 1
00:04:41.009  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:41.009   16:54:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:41.009   16:54:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:41.009   16:54:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59672 /var/tmp/spdk-nbd.sock
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59672 ']'
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:41.009   16:54:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:41.009   16:54:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:41.009  Malloc0
00:04:41.009   16:54:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:41.268  Malloc1
00:04:41.268   16:54:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:41.268   16:54:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:41.526  /dev/nbd0
00:04:41.526    16:54:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:41.526   16:54:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:41.526  1+0 records in
00:04:41.526  1+0 records out
00:04:41.526  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490068 s, 8.4 MB/s
00:04:41.526    16:54:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:41.526   16:54:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:41.526   16:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:41.526   16:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:41.526   16:54:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:41.784  /dev/nbd1
00:04:41.784    16:54:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:41.784   16:54:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:41.784  1+0 records in
00:04:41.784  1+0 records out
00:04:41.784  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262769 s, 15.6 MB/s
00:04:41.784    16:54:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:41.784   16:54:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:41.784   16:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:41.784   16:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
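The nbd_start_disks loop traced above attaches each bdev to an NBD node with nbd_start_disk and then waits for the kernel device to become usable: poll /proc/partitions for the name, then prove the node answers I/O with a one-block direct read. A condensed sketch of that waitfornbd pattern, with the same 20-try limit as the trace and an assumed 0.1 s retry sleep (the real helper lives in autotest_common.sh):

    #!/usr/bin/env bash
    # Wait until /dev/<nbd_name> shows up and services a 4 KiB direct-I/O read.
    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct 2> /dev/null \
                && [[ $(stat -c %s $tmp) -ne 0 ]]; then
                rm -f $tmp
                return 0        # device is present and readable
            fi
            sleep 0.1
        done
        rm -f $tmp
        return 1
    }

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Malloc0 /dev/nbd0     # prints /dev/nbd0
    waitfornbd nbd0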
00:04:41.784    16:54:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:41.784    16:54:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:41.784     16:54:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:42.043    16:54:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:42.043    {
00:04:42.043      "nbd_device": "/dev/nbd0",
00:04:42.043      "bdev_name": "Malloc0"
00:04:42.043    },
00:04:42.043    {
00:04:42.043      "nbd_device": "/dev/nbd1",
00:04:42.043      "bdev_name": "Malloc1"
00:04:42.043    }
00:04:42.043  ]'
00:04:42.043     16:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:42.043     16:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:42.043    {
00:04:42.043      "nbd_device": "/dev/nbd0",
00:04:42.043      "bdev_name": "Malloc0"
00:04:42.043    },
00:04:42.043    {
00:04:42.043      "nbd_device": "/dev/nbd1",
00:04:42.043      "bdev_name": "Malloc1"
00:04:42.043    }
00:04:42.043  ]'
00:04:42.043    16:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:42.043  /dev/nbd1'
00:04:42.043     16:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:42.043     16:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:42.043  /dev/nbd1'
00:04:42.043    16:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:42.043    16:54:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
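The count check just above is built from three pipeline stages visible in the trace: nbd_get_disks returns a JSON array over RPC, jq pulls out each nbd_device path, and grep -c counts the /dev/nbd matches. As a standalone helper:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_get_count() {
        # grep -c prints 0 but exits nonzero when nothing matches, hence the || true.
        "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count)
    [[ $count -eq 2 ]] || echo "expected 2 NBD devices, found $count" >&2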
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:42.043  256+0 records in
00:04:42.043  256+0 records out
00:04:42.043  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555291 s, 189 MB/s
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:42.043  256+0 records in
00:04:42.043  256+0 records out
00:04:42.043  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129654 s, 80.9 MB/s
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:42.043  256+0 records in
00:04:42.043  256+0 records out
00:04:42.043  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153811 s, 68.2 MB/s
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:42.043   16:54:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
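The write/verify pass above is a plain data round trip: 1 MiB of /dev/urandom goes into a temp file, the file is written to every NBD device with direct I/O, and cmp reads each device's first 1 MiB back byte-for-byte against the source. A standalone sketch, with the temp file relocated to /tmp:

    #!/usr/bin/env bash
    set -e                       # any dd or cmp failure aborts, failing the check
    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write pass
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                               # verify pass
    done
    rm "$tmp"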
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:42.043   16:54:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:42.299    16:54:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:42.299   16:54:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:42.556    16:54:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:42.557   16:54:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
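nbd_stop_disks mirrors the start path: each device is detached over RPC and waitfornbd_exit polls /proc/partitions until the node disappears. A sketch with the trace's 20-try limit and an assumed 0.1 s sleep between polls:

    #!/usr/bin/env bash
    # Detach an NBD device and wait for the kernel node to vanish.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: done waiting
            sleep 0.1
        done
        return 0
    }

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    waitfornbd_exit nbd0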
00:04:42.557    16:54:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:42.557    16:54:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:42.557     16:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:42.814    16:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:42.814     16:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:42.814     16:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:42.814    16:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:42.814     16:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:42.814     16:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:42.814     16:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:42.814    16:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:42.814    16:54:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:42.814   16:54:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:42.814   16:54:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:42.814   16:54:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:42.814   16:54:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:43.071   16:54:05 event.app_repeat -- event/event.sh@35 -- # sleep 3
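spdk_kill_instance SIGTERM plus the three-second sleep closes out this round; the app_repeat binary catches the signal and comes back up for the next one. A condensed reconstruction of the loop event.sh is running here, per the sh@ line numbers in the trace (waitforlisten and nbd_rpc_data_verify are the helpers traced above; APP_PID stands in for the repeat app's pid, 59672 in this run, and the nbd_common.sh path is inferred from the trace prefix):

    #!/usr/bin/env bash
    source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh   # waitforlisten, ...
    source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh          # nbd_rpc_data_verify (assumed path)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$APP_PID" "$sock"                  # block until the socket accepts RPCs
        "$rpc" -s "$sock" bdev_malloc_create 64 4096      # Malloc0
        "$rpc" -s "$sock" bdev_malloc_create 64 4096      # Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM      # app restarts itself for the next round
        sleep 3
    done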
00:04:43.638  [2024-12-09 16:54:06.523021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:43.638  [2024-12-09 16:54:06.600838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.638  [2024-12-09 16:54:06.600878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:43.895  [2024-12-09 16:54:06.702757] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:43.895  [2024-12-09 16:54:06.702824] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:46.422  spdk_app_start Round 2
00:04:46.422  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:46.422   16:54:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:46.422   16:54:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:46.422   16:54:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59672 /var/tmp/spdk-nbd.sock
00:04:46.422   16:54:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59672 ']'
00:04:46.422   16:54:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:46.422   16:54:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:46.422   16:54:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:46.422   16:54:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:46.422   16:54:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:46.422   16:54:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:46.422   16:54:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:46.422   16:54:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:46.422  Malloc0
00:04:46.422   16:54:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:46.680  Malloc1
00:04:46.680   16:54:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:46.680   16:54:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:46.939  /dev/nbd0
00:04:46.939    16:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:46.939   16:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:46.939  1+0 records in
00:04:46.939  1+0 records out
00:04:46.939  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198221 s, 20.7 MB/s
00:04:46.939    16:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:46.939   16:54:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:46.939   16:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:46.939   16:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:46.939   16:54:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:47.197  /dev/nbd1
00:04:47.197    16:54:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:47.197   16:54:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:47.197  1+0 records in
00:04:47.197  1+0 records out
00:04:47.197  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299687 s, 13.7 MB/s
00:04:47.197    16:54:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:47.197   16:54:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:47.197   16:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:47.197   16:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:47.197    16:54:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:47.197    16:54:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:47.197     16:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:47.455    16:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:47.455    {
00:04:47.455      "nbd_device": "/dev/nbd0",
00:04:47.455      "bdev_name": "Malloc0"
00:04:47.455    },
00:04:47.455    {
00:04:47.455      "nbd_device": "/dev/nbd1",
00:04:47.455      "bdev_name": "Malloc1"
00:04:47.455    }
00:04:47.455  ]'
00:04:47.455     16:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:47.455    {
00:04:47.455      "nbd_device": "/dev/nbd0",
00:04:47.455      "bdev_name": "Malloc0"
00:04:47.455    },
00:04:47.455    {
00:04:47.455      "nbd_device": "/dev/nbd1",
00:04:47.455      "bdev_name": "Malloc1"
00:04:47.455    }
00:04:47.455  ]'
00:04:47.455     16:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:47.455    16:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:47.455  /dev/nbd1'
00:04:47.455     16:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:47.455  /dev/nbd1'
00:04:47.455     16:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:47.455    16:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:47.455    16:54:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:47.455   16:54:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:47.455   16:54:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:47.455   16:54:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:47.455   16:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:47.455   16:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:47.455   16:54:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:47.456  256+0 records in
00:04:47.456  256+0 records out
00:04:47.456  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00937935 s, 112 MB/s
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:47.456  256+0 records in
00:04:47.456  256+0 records out
00:04:47.456  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149611 s, 70.1 MB/s
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:47.456  256+0 records in
00:04:47.456  256+0 records out
00:04:47.456  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125773 s, 83.4 MB/s
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:47.456   16:54:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:47.713    16:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:47.713   16:54:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:47.971    16:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:47.971   16:54:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:47.971    16:54:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:47.971    16:54:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:47.971     16:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:48.229    16:54:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:48.229     16:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:48.229     16:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:48.229    16:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:48.229     16:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:48.229     16:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:48.229     16:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:48.229    16:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:48.229    16:54:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:48.229   16:54:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:48.229   16:54:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:48.229   16:54:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:48.229   16:54:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:48.578   16:54:11 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:49.163  [2024-12-09 16:54:11.932229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:49.163  [2024-12-09 16:54:12.013698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:49.163  [2024-12-09 16:54:12.013702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.163  [2024-12-09 16:54:12.113459] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:49.163  [2024-12-09 16:54:12.113522] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:51.691  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:51.691   16:54:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59672 /var/tmp/spdk-nbd.sock
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59672 ']'
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:51.691   16:54:14 event.app_repeat -- event/event.sh@39 -- # killprocess 59672
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59672 ']'
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59672
00:04:51.691    16:54:14 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:51.691    16:54:14 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59672
00:04:51.691  killing process with pid 59672
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59672'
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59672
00:04:51.691   16:54:14 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59672
00:04:52.255  spdk_app_start is called in Round 0.
00:04:52.255  Shutdown signal received, stop current app iteration
00:04:52.255  Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 reinitialization...
00:04:52.255  spdk_app_start is called in Round 1.
00:04:52.255  Shutdown signal received, stop current app iteration
00:04:52.255  Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 reinitialization...
00:04:52.255  spdk_app_start is called in Round 2.
00:04:52.255  Shutdown signal received, stop current app iteration
00:04:52.255  Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 reinitialization...
00:04:52.255  spdk_app_start is called in Round 3.
00:04:52.255  Shutdown signal received, stop current app iteration
00:04:52.255  ************************************
00:04:52.255  END TEST app_repeat
00:04:52.255  ************************************
00:04:52.255   16:54:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:52.255   16:54:15 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:52.255  
00:04:52.255  real	0m17.823s
00:04:52.255  user	0m39.078s
00:04:52.255  sys	0m2.116s
00:04:52.255   16:54:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:52.255   16:54:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:52.255   16:54:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:52.255   16:54:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:04:52.255   16:54:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:52.255   16:54:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:52.255   16:54:15 event -- common/autotest_common.sh@10 -- # set +x
00:04:52.255  ************************************
00:04:52.255  START TEST cpu_locks
00:04:52.255  ************************************
00:04:52.255   16:54:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:04:52.255  * Looking for test storage...
00:04:52.255  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:52.255    16:54:15 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:52.255     16:54:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:52.255     16:54:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:04:52.514    16:54:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:52.514     16:54:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:52.514    16:54:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:04:52.514    16:54:15 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:52.514    16:54:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:52.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.514  		--rc genhtml_branch_coverage=1
00:04:52.514  		--rc genhtml_function_coverage=1
00:04:52.514  		--rc genhtml_legend=1
00:04:52.514  		--rc geninfo_all_blocks=1
00:04:52.514  		--rc geninfo_unexecuted_blocks=1
00:04:52.514  		
00:04:52.514  		'
00:04:52.514    16:54:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:52.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.514  		--rc genhtml_branch_coverage=1
00:04:52.514  		--rc genhtml_function_coverage=1
00:04:52.514  		--rc genhtml_legend=1
00:04:52.514  		--rc geninfo_all_blocks=1
00:04:52.514  		--rc geninfo_unexecuted_blocks=1
00:04:52.514  		
00:04:52.514  		'
00:04:52.514    16:54:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:52.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.514  		--rc genhtml_branch_coverage=1
00:04:52.514  		--rc genhtml_function_coverage=1
00:04:52.514  		--rc genhtml_legend=1
00:04:52.514  		--rc geninfo_all_blocks=1
00:04:52.514  		--rc geninfo_unexecuted_blocks=1
00:04:52.514  		
00:04:52.514  		'
00:04:52.514    16:54:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:52.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.514  		--rc genhtml_branch_coverage=1
00:04:52.514  		--rc genhtml_function_coverage=1
00:04:52.514  		--rc genhtml_legend=1
00:04:52.514  		--rc geninfo_all_blocks=1
00:04:52.514  		--rc geninfo_unexecuted_blocks=1
00:04:52.514  		
00:04:52.514  		'
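The cmp_versions trace above is deciding whether the installed lcov predates 1.15 (lt 1.15 2), which controls the branch/function coverage flags exported just after it: both version strings are split on '.', '-' or ':' and compared numerically field by field. A trimmed-down sketch that assumes purely numeric fields (the real scripts/common.sh also validates each field through its decimal helper):

    #!/usr/bin/env bash
    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        # Supports the <, > and == operators, which is all lt/gt need here.
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=$((10#${ver1[v]:-0})) b=$((10#${ver2[v]:-0}))   # missing fields count as 0
            ((a > b)) && { [[ $op == ">" ]]; return; }
            ((a < b)) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]   # every field matched
    }

    lt 1.15 2 && echo "1.15 < 2"   # true: first field 1 < 2 settles it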
00:04:52.514   16:54:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:52.514   16:54:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:52.514   16:54:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:52.514   16:54:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:52.514   16:54:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:52.514   16:54:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:52.514   16:54:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:52.514  ************************************
00:04:52.514  START TEST default_locks
00:04:52.514  ************************************
00:04:52.514  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60097
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60097
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60097 ']'
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.514   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:52.515   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.515   16:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:52.515   16:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:52.515  [2024-12-09 16:54:15.421624] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:52.515  [2024-12-09 16:54:15.421724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ]
00:04:52.775  [2024-12-09 16:54:15.572182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.775  [2024-12-09 16:54:15.673593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.341   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.341   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:04:53.341   16:54:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60097
00:04:53.341   16:54:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:53.341   16:54:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60097
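locks_exist, traced above, is how every test in this file confirms a target holds its per-core lock: lslocks lists the file locks owned by the pid, and grep looks for the spdk_cpu_lock name those lock files carry (one per core in the -m mask). As a standalone probe:

    #!/usr/bin/env bash
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 60097 && echo "pid 60097 holds its core lock(s)"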
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60097
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60097 ']'
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60097
00:04:53.613    16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:53.613    16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60097
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:53.613  killing process with pid 60097
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60097'
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60097
00:04:53.613   16:54:16 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60097
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60097
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60097
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:55.520    16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:55.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:55.520  ERROR: process (pid: 60097) is no longer running
00:04:55.520  ************************************
00:04:55.520  END TEST default_locks
00:04:55.520  ************************************
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60097
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60097 ']'
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:55.520  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60097) - No such process
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
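The NOT wrapper traced above asserts that a command fails: es starts at 0, running the wrapped command overwrites it with any nonzero exit status, and the final arithmetic test succeeds only when es ended up nonzero. That is how the test proves waitforlisten errors out once pid 60097 is gone. A trimmed sketch (the real helper also validates the argument with type -t and treats statuses above 128 as signals):

    #!/usr/bin/env bash
    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds exactly when the wrapped command failed.
        ((es != 0))
    }

    NOT kill -0 60097 && echo "pid 60097 is gone, as expected"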
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:55.520  
00:04:55.520  real	0m2.733s
00:04:55.520  user	0m2.740s
00:04:55.520  sys	0m0.434s
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.520   16:54:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:55.520   16:54:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:55.520   16:54:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.520   16:54:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.520   16:54:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:55.520  ************************************
00:04:55.520  START TEST default_locks_via_rpc
00:04:55.520  ************************************
00:04:55.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60161
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60161
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60161 ']'
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.520   16:54:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:55.520  [2024-12-09 16:54:18.223573] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:55.520  [2024-12-09 16:54:18.223698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60161 ]
00:04:55.520  [2024-12-09 16:54:18.379087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.520  [2024-12-09 16:54:18.487019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.087   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:56.087   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60161
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60161
00:04:56.088   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
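This test toggles core locks on a live target: framework_disable_cpumask_locks makes the running process drop its lock files, no_locks confirms nothing is held, framework_enable_cpumask_locks re-acquires them, and the lslocks probe above confirms the lock is back. A sketch of the sequence against rpc.py's default /var/tmp/spdk.sock socket, reusing the same lslocks check as locks_exist:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=60161

    "$rpc" framework_disable_cpumask_locks     # target releases its spdk_cpu_lock files
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held" >&2

    "$rpc" framework_enable_cpumask_locks      # target re-acquires them
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks held again"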
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60161
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60161 ']'
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60161
00:04:56.345    16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:56.345    16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60161
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:56.345  killing process with pid 60161
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60161'
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60161
00:04:56.345   16:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60161
00:04:58.248  ************************************
00:04:58.248  END TEST default_locks_via_rpc
00:04:58.248  ************************************
00:04:58.248  
00:04:58.248  real	0m2.670s
00:04:58.248  user	0m2.678s
00:04:58.248  sys	0m0.437s
00:04:58.248   16:54:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.248   16:54:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:58.248   16:54:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:58.248   16:54:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.248   16:54:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.248   16:54:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:58.248  ************************************
00:04:58.248  START TEST non_locking_app_on_locked_coremask
00:04:58.248  ************************************
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60224
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60224 /var/tmp/spdk.sock
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60224 ']'
00:04:58.248  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:58.248   16:54:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:58.248  [2024-12-09 16:54:20.952354] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:58.248  [2024-12-09 16:54:20.952478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60224 ]
00:04:58.248  [2024-12-09 16:54:21.109448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.248  [2024-12-09 16:54:21.210094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.887  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60240
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60240 /var/tmp/spdk2.sock
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60240 ']'
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.887   16:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:58.887  [2024-12-09 16:54:21.878095] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:04:58.887  [2024-12-09 16:54:21.878411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60240 ]
00:04:59.147  [2024-12-09 16:54:22.053096] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:59.147  [2024-12-09 16:54:22.053152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.408  [2024-12-09 16:54:22.259166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60224
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60224
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60224
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60224 ']'
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60224
00:05:00.792    16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:00.792    16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60224
00:05:00.792  killing process with pid 60224
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60224'
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60224
00:05:00.792   16:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60224
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60240
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60240 ']'
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60240
00:05:04.075    16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:04.075    16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60240
00:05:04.075  killing process with pid 60240
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60240'
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60240
00:05:04.075   16:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60240
00:05:05.016  
00:05:05.016  real	0m6.958s
00:05:05.016  user	0m7.207s
00:05:05.016  sys	0m0.802s
00:05:05.016   16:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.016   16:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.016  ************************************
00:05:05.016  END TEST non_locking_app_on_locked_coremask
00:05:05.016  ************************************
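The locks_exist check traced above (cpu_locks.sh@22) is the core assertion of this suite. A minimal sketch of it, reconstructed from the trace rather than copied verbatim from the script:

    locks_exist() {
        local pid=$1
        # spdk_tgt flocks one /var/tmp/spdk_cpu_lock_NNN file per claimed core;
        # lslocks (util-linux) lists the locks held by the pid, and grep -q
        # succeeds only if an spdk_cpu_lock entry is among them.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }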
00:05:05.016   16:54:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:05.016   16:54:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.017   16:54:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.017   16:54:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.017  ************************************
00:05:05.017  START TEST locking_app_on_unlocked_coremask
00:05:05.017  ************************************
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60337
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60337 /var/tmp/spdk.sock
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60337 ']'
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.017  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.017   16:54:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.017  [2024-12-09 16:54:27.971282] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:05.017  [2024-12-09 16:54:27.971397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60337 ]
00:05:05.278  [2024-12-09 16:54:28.132242] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:05.278  [2024-12-09 16:54:28.132306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.278  [2024-12-09 16:54:28.234574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.848  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60347
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60347 /var/tmp/spdk2.sock
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60347 ']'
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.848   16:54:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:06.108  [2024-12-09 16:54:28.895893] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:06.108  [2024-12-09 16:54:28.896148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60347 ]
00:05:06.108  [2024-12-09 16:54:29.069215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.366  [2024-12-09 16:54:29.274681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60347
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60347
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60337
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60337 ']'
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60337
00:05:07.752    16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:07.752    16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60337
00:05:07.752  killing process with pid 60337
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60337'
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60337
00:05:07.752   16:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60337
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60347
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60347 ']'
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60347
00:05:11.051    16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:11.051    16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60347
00:05:11.051  killing process with pid 60347
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60347'
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60347
00:05:11.051   16:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60347
00:05:12.450  ************************************
00:05:12.450  END TEST locking_app_on_unlocked_coremask
00:05:12.450  ************************************
00:05:12.450  
00:05:12.450  real	0m7.486s
00:05:12.450  user	0m7.728s
00:05:12.450  sys	0m0.844s
00:05:12.450   16:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.450   16:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
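A minimal reproduction sketch of the scenario that just finished, using the binary path and socket seen in the trace: the first target opts out of core locks, so a second target on the same core mask can still start and claim the lock for itself.

    # Sketch only; flags and paths taken from the trace above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims /var/tmp/spdk_cpu_lock_000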
00:05:12.451   16:54:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:12.451   16:54:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.451   16:54:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.451   16:54:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:12.451  ************************************
00:05:12.451  START TEST locking_app_on_locked_coremask
00:05:12.451  ************************************
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60460
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60460 /var/tmp/spdk.sock
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60460 ']'
00:05:12.451  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:12.451   16:54:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.712  [2024-12-09 16:54:35.514661] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:12.712  [2024-12-09 16:54:35.514774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60460 ]
00:05:12.712  [2024-12-09 16:54:35.672804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.977  [2024-12-09 16:54:35.775604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60476
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60476 /var/tmp/spdk2.sock
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60476 /var/tmp/spdk2.sock
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:13.548    16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:13.548  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60476 /var/tmp/spdk2.sock
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60476 ']'
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:13.548   16:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.548  [2024-12-09 16:54:36.445785] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:13.548  [2024-12-09 16:54:36.445918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60476 ]
00:05:13.809  [2024-12-09 16:54:36.618463] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60460 has claimed it.
00:05:13.809  [2024-12-09 16:54:36.618527] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:14.070  ERROR: process (pid: 60476) is no longer running
00:05:14.070  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60476) - No such process
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60460
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:14.070   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60460
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60460
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60460 ']'
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60460
00:05:14.331    16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:14.331    16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60460
00:05:14.331  killing process with pid 60460
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60460'
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60460
00:05:14.331   16:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60460
00:05:16.247  
00:05:16.247  real	0m3.401s
00:05:16.247  user	0m3.593s
00:05:16.247  sys	0m0.583s
00:05:16.247   16:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.247  ************************************
00:05:16.247  END TEST locking_app_on_locked_coremask
00:05:16.247  ************************************
00:05:16.247   16:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
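The NOT wrapper used in this test inverts a command's outcome: the second spdk_tgt is expected to die with "Unable to acquire lock on assigned core mask", and the test passes precisely because waitforlisten fails. A simplified sketch of that pattern (the real helper in autotest_common.sh also validates its argument and tracks distinct exit codes, as the es=1 lines above show):

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }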
00:05:16.247   16:54:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:16.247   16:54:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:16.247   16:54:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.247   16:54:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:16.247  ************************************
00:05:16.247  START TEST locking_overlapped_coremask
00:05:16.247  ************************************
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60529
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60529 /var/tmp/spdk.sock
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60529 ']'
00:05:16.247  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:16.247   16:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.247  [2024-12-09 16:54:39.006488] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:16.247  [2024-12-09 16:54:39.007084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ]
00:05:16.247  [2024-12-09 16:54:39.184409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:16.508  [2024-12-09 16:54:39.288906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:16.508  [2024-12-09 16:54:39.289401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.508  [2024-12-09 16:54:39.289417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60547
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60547 /var/tmp/spdk2.sock
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60547 /var/tmp/spdk2.sock
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:17.094    16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:17.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60547 /var/tmp/spdk2.sock
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60547 ']'
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:17.094   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.095   16:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.095  [2024-12-09 16:54:39.962820] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:17.095  [2024-12-09 16:54:39.962946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60547 ]
00:05:17.356  [2024-12-09 16:54:40.139622] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60529 has claimed it.
00:05:17.356  [2024-12-09 16:54:40.143890] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:17.926  ERROR: process (pid: 60547) is no longer running
00:05:17.926  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60547) - No such process
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60529
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60529 ']'
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60529
00:05:17.926    16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:17.926    16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60529
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:17.926  killing process with pid 60529
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60529'
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60529
00:05:17.926   16:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60529
00:05:19.312  
00:05:19.312  real	0m3.356s
00:05:19.312  user	0m9.061s
00:05:19.312  sys	0m0.479s
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.312  ************************************
00:05:19.312  END TEST locking_overlapped_coremask
00:05:19.312  ************************************
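check_remaining_locks, traced at cpu_locks.sh@36-38 above, is simple enough to restate directly: glob the lock files actually present and compare them to the set expected for mask 0x7 (cores 0-2), built by brace expansion:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # the arrays must match element for element
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }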
00:05:19.312   16:54:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:19.312   16:54:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.312   16:54:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.312   16:54:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.312  ************************************
00:05:19.312  START TEST locking_overlapped_coremask_via_rpc
00:05:19.312  ************************************
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:19.312  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60606
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60606 /var/tmp/spdk.sock
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60606 ']'
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.312   16:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.574  [2024-12-09 16:54:42.397549] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:19.574  [2024-12-09 16:54:42.397814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ]
00:05:19.574  [2024-12-09 16:54:42.558015] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:19.574  [2024-12-09 16:54:42.558062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:19.835  [2024-12-09 16:54:42.676605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:19.835  [2024-12-09 16:54:42.677623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:19.835  [2024-12-09 16:54:42.677756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60624
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60624 /var/tmp/spdk2.sock
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60624 ']'
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:20.408  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.408   16:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.408  [2024-12-09 16:54:43.346721] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:20.408  [2024-12-09 16:54:43.347015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60624 ]
00:05:20.669  [2024-12-09 16:54:43.520240] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:20.669  [2024-12-09 16:54:43.524878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:20.930  [2024-12-09 16:54:43.742229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:20.930  [2024-12-09 16:54:43.742473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:20.930  [2024-12-09 16:54:43.742498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:21.911    16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:21.911   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.911  [2024-12-09 16:54:44.939007] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60606 has claimed it.
00:05:22.173  request:
00:05:22.173  {
00:05:22.173  "method": "framework_enable_cpumask_locks",
00:05:22.173  "req_id": 1
00:05:22.173  }
00:05:22.173  Got JSON-RPC error response
00:05:22.173  response:
00:05:22.173  {
00:05:22.173  "code": -32603,
00:05:22.173  "message": "Failed to claim CPU core: 2"
00:05:22.173  }
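The error above is the framework_enable_cpumask_locks RPC failing with -32603 because core 2 is already flocked by pid 60606. The same call can be issued by hand against the second target's socket; a sketch assuming the repo's stock scripts/rpc.py, which the rpc_cmd helper drives under the hood:

    # sends the RPC over the non-default socket and prints the JSON error
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks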
00:05:22.173  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60606 /var/tmp/spdk.sock
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60606 ']'
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:22.173   16:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.173  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60624 /var/tmp/spdk2.sock
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60624 ']'
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:22.173   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.434  ************************************
00:05:22.434  END TEST locking_overlapped_coremask_via_rpc
00:05:22.434  ************************************
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:22.434  
00:05:22.434  real	0m3.053s
00:05:22.434  user	0m1.080s
00:05:22.434  sys	0m0.122s
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:22.434   16:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.434   16:54:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:22.434   16:54:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60606 ]]
00:05:22.434   16:54:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60606
00:05:22.434   16:54:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60606 ']'
00:05:22.434   16:54:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60606
00:05:22.435    16:54:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:22.435   16:54:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:22.435    16:54:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60606
00:05:22.435  killing process with pid 60606
00:05:22.435   16:54:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:22.435   16:54:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:22.435   16:54:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60606'
00:05:22.435   16:54:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60606
00:05:22.435   16:54:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60606
00:05:24.347   16:54:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60624 ]]
00:05:24.347   16:54:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60624
00:05:24.347   16:54:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60624 ']'
00:05:24.347   16:54:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60624
00:05:24.347    16:54:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:24.347   16:54:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:24.347    16:54:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60624
00:05:24.347  killing process with pid 60624
00:05:24.348   16:54:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:24.348   16:54:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:24.348   16:54:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60624'
00:05:24.348   16:54:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60624
00:05:24.348   16:54:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60624
00:05:25.732   16:54:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:25.732   16:54:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:25.732   16:54:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60606 ]]
00:05:25.732   16:54:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60606
00:05:25.732   16:54:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60606 ']'
00:05:25.732   16:54:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60606
00:05:25.732  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60606) - No such process
00:05:25.732  Process with pid 60606 is not found
00:05:25.732  Process with pid 60624 is not found
00:05:25.732   16:54:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60606 is not found'
00:05:25.732   16:54:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60624 ]]
00:05:25.732   16:54:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60624
00:05:25.733   16:54:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60624 ']'
00:05:25.733   16:54:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60624
00:05:25.733  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60624) - No such process
00:05:25.733   16:54:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60624 is not found'
00:05:25.733   16:54:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:25.733  ************************************
00:05:25.733  END TEST cpu_locks
00:05:25.733  ************************************
00:05:25.733  
00:05:25.733  real	0m33.351s
00:05:25.733  user	0m57.417s
00:05:25.733  sys	0m4.517s
00:05:25.733   16:54:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:25.733   16:54:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
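killprocess, traced throughout this suite, follows one fixed shape. A condensed sketch reconstructed from the trace (the real helper also prints the "killing process with pid" lines seen above, and handles the sudo case separately, which is elided here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1      # probe: the process must still exist
        if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
            kill "$pid" && wait "$pid"  # plain child: kill it and reap it
        fi
    }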
00:05:25.733  ************************************
00:05:25.733  END TEST event
00:05:25.733  ************************************
00:05:25.733  
00:05:25.733  real	0m59.316s
00:05:25.733  user	1m48.785s
00:05:25.733  sys	0m7.429s
00:05:25.733   16:54:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:25.733   16:54:48 event -- common/autotest_common.sh@10 -- # set +x
00:05:25.733   16:54:48  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:25.733   16:54:48  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:25.733   16:54:48  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:25.733   16:54:48  -- common/autotest_common.sh@10 -- # set +x
00:05:25.733  ************************************
00:05:25.733  START TEST thread
00:05:25.733  ************************************
00:05:25.733   16:54:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:25.733  * Looking for test storage...
00:05:25.733  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:05:25.733    16:54:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:25.733     16:54:48 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:05:25.733     16:54:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:25.992    16:54:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:25.992    16:54:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:25.992    16:54:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:25.992    16:54:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:25.992    16:54:48 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:25.992    16:54:48 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:25.992    16:54:48 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:25.992    16:54:48 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:25.992    16:54:48 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:25.992    16:54:48 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:25.992    16:54:48 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:25.992    16:54:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:25.992    16:54:48 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:25.992    16:54:48 thread -- scripts/common.sh@345 -- # : 1
00:05:25.992    16:54:48 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:25.992    16:54:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:25.992     16:54:48 thread -- scripts/common.sh@365 -- # decimal 1
00:05:25.992     16:54:48 thread -- scripts/common.sh@353 -- # local d=1
00:05:25.992     16:54:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:25.992     16:54:48 thread -- scripts/common.sh@355 -- # echo 1
00:05:25.992    16:54:48 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:25.992     16:54:48 thread -- scripts/common.sh@366 -- # decimal 2
00:05:25.992     16:54:48 thread -- scripts/common.sh@353 -- # local d=2
00:05:25.992     16:54:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:25.992     16:54:48 thread -- scripts/common.sh@355 -- # echo 2
00:05:25.992    16:54:48 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:25.992    16:54:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:25.992    16:54:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:25.992    16:54:48 thread -- scripts/common.sh@368 -- # return 0
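The walk above is scripts/common.sh deciding whether the installed lcov (1.15 in this run) predates 2.0; since it does, the lcov_branch_coverage spelling of the coverage options is exported next. Condensed, the comparison does:

    # split on '.', '-' or ':' and compare numerically, field by field
    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
    (( ver1[0] < ver2[0] ))            # 1 < 2, so `lt 1.15 2` returns 0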
00:05:25.992    16:54:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:25.992    16:54:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:25.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.992  		--rc genhtml_branch_coverage=1
00:05:25.992  		--rc genhtml_function_coverage=1
00:05:25.992  		--rc genhtml_legend=1
00:05:25.992  		--rc geninfo_all_blocks=1
00:05:25.992  		--rc geninfo_unexecuted_blocks=1
00:05:25.992  		
00:05:25.992  		'
00:05:25.992    16:54:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:25.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.992  		--rc genhtml_branch_coverage=1
00:05:25.992  		--rc genhtml_function_coverage=1
00:05:25.992  		--rc genhtml_legend=1
00:05:25.992  		--rc geninfo_all_blocks=1
00:05:25.992  		--rc geninfo_unexecuted_blocks=1
00:05:25.992  		
00:05:25.992  		'
00:05:25.992    16:54:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:25.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.992  		--rc genhtml_branch_coverage=1
00:05:25.992  		--rc genhtml_function_coverage=1
00:05:25.992  		--rc genhtml_legend=1
00:05:25.992  		--rc geninfo_all_blocks=1
00:05:25.992  		--rc geninfo_unexecuted_blocks=1
00:05:25.992  		
00:05:25.992  		'
00:05:25.992    16:54:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:25.992  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.992  		--rc genhtml_branch_coverage=1
00:05:25.992  		--rc genhtml_function_coverage=1
00:05:25.992  		--rc genhtml_legend=1
00:05:25.992  		--rc geninfo_all_blocks=1
00:05:25.992  		--rc geninfo_unexecuted_blocks=1
00:05:25.992  		
00:05:25.992  		'
00:05:25.992   16:54:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:25.992   16:54:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:25.992   16:54:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:25.992   16:54:48 thread -- common/autotest_common.sh@10 -- # set +x
00:05:25.992  ************************************
00:05:25.992  START TEST thread_poller_perf
00:05:25.992  ************************************
00:05:25.992   16:54:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:25.992  [2024-12-09 16:54:48.825183] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:25.992  [2024-12-09 16:54:48.825298] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60790 ]
00:05:25.992  [2024-12-09 16:54:48.984198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:26.251  [2024-12-09 16:54:49.091208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.251  Running 1000 pollers for 1 seconds with 1 microseconds period.
[2024-12-09T16:54:50.310Z]  ======================================
[2024-12-09T16:54:50.310Z]  busy:2609942500 (cyc)
[2024-12-09T16:54:50.310Z]  total_run_count: 305000
[2024-12-09T16:54:50.310Z]  tsc_hz: 2600000000 (cyc)
[2024-12-09T16:54:50.310Z]  ======================================
[2024-12-09T16:54:50.310Z]  poller_cost: 8557 (cyc), 3291 (nsec)
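The poller_cost printed above is plain arithmetic over the counters in the same block: busy cycles divided by total_run_count gives cycles per poller execution, and tsc_hz converts cycles to nanoseconds. A minimal bash sketch of that derivation, using this run's figures (the variable names are illustrative, not poller_perf internals):

    # Derive poller_cost from the counters reported above.
    busy=2609942500          # busy TSC cycles over the whole run
    total_run_count=305000   # number of poller executions
    tsc_hz=2600000000        # TSC frequency (cycles per second)
    cyc=$(( busy / total_run_count ))       # 8557 cycles per execution
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # 3291 ns per execution
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic explains the zero-period run that follows (714 cyc, 274 nsec): with no sleep between iterations the pollers fire far more often, so total_run_count is an order of magnitude higher and the measured per-execution cost is much smaller.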
00:05:27.269  ************************************
00:05:27.269  END TEST thread_poller_perf
00:05:27.269  ************************************
00:05:27.269  
00:05:27.269  real	0m1.459s
00:05:27.269  user	0m1.283s
00:05:27.269  sys	0m0.068s
00:05:27.269   16:54:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:27.269   16:54:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:27.269   16:54:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:27.269   16:54:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:27.269   16:54:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:27.269   16:54:50 thread -- common/autotest_common.sh@10 -- # set +x
00:05:27.529  ************************************
00:05:27.530  START TEST thread_poller_perf
00:05:27.530  ************************************
00:05:27.530   16:54:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:27.530  [2024-12-09 16:54:50.342379] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:27.530  [2024-12-09 16:54:50.342491] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ]
00:05:27.530  [2024-12-09 16:54:50.504859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.791  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:27.791  [2024-12-09 16:54:50.613839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-09T16:54:51.774Z]  ======================================
[2024-12-09T16:54:51.774Z]  busy:2603644072 (cyc)
[2024-12-09T16:54:51.774Z]  total_run_count: 3643000
[2024-12-09T16:54:51.774Z]  tsc_hz: 2600000000 (cyc)
[2024-12-09T16:54:51.774Z]  ======================================
[2024-12-09T16:54:51.774Z]  poller_cost: 714 (cyc), 274 (nsec)
00:05:28.993  
00:05:28.993  real	0m1.463s
00:05:28.993  user	0m1.287s
00:05:28.993  sys	0m0.069s
00:05:28.993   16:54:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.993  ************************************
00:05:28.993  END TEST thread_poller_perf
00:05:28.993  ************************************
00:05:28.993   16:54:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:28.993   16:54:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:28.993  
00:05:28.993  real	0m3.183s
00:05:28.993  user	0m2.664s
00:05:28.993  sys	0m0.266s
00:05:28.993  ************************************
00:05:28.993  END TEST thread
00:05:28.993  ************************************
00:05:28.993   16:54:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.993   16:54:51 thread -- common/autotest_common.sh@10 -- # set +x
00:05:28.993   16:54:51  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:28.993   16:54:51  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:28.993   16:54:51  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.993   16:54:51  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.993   16:54:51  -- common/autotest_common.sh@10 -- # set +x
00:05:28.993  ************************************
00:05:28.993  START TEST app_cmdline
00:05:28.993  ************************************
00:05:28.993   16:54:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:28.993  * Looking for test storage...
00:05:28.993  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:28.993    16:54:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:28.993     16:54:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:28.993     16:54:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:05:28.993    16:54:52 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:28.993     16:54:52 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:28.993    16:54:52 app_cmdline -- scripts/common.sh@368 -- # return 0
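The trace above (scripts/common.sh@333-368) is cmp_versions deciding "1.15 < 2" for the installed lcov: both versions are split on ".", "-" and ":" into arrays, the components are compared left to right, and for the "<" operator the function returns success as soon as ver1[v] < ver2[v]. A standalone sketch of that logic, reconstructed from the trace (it assumes purely numeric components and is not a copy of scripts/common.sh):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ver1[v] > ver2[v] )) && return 1   # first differing component decides
            (( ver1[v] < ver2[v] )) && return 0   # missing components count as 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov"

Here the check succeeding selects the pre-2.0 "--rc lcov_*" option spellings exported just below.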
00:05:28.993    16:54:52 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:28.993    16:54:52 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:28.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.993  		--rc genhtml_branch_coverage=1
00:05:28.993  		--rc genhtml_function_coverage=1
00:05:28.993  		--rc genhtml_legend=1
00:05:28.993  		--rc geninfo_all_blocks=1
00:05:28.993  		--rc geninfo_unexecuted_blocks=1
00:05:28.993  		
00:05:28.993  		'
00:05:28.993    16:54:52 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:28.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.993  		--rc genhtml_branch_coverage=1
00:05:28.993  		--rc genhtml_function_coverage=1
00:05:28.993  		--rc genhtml_legend=1
00:05:28.993  		--rc geninfo_all_blocks=1
00:05:28.993  		--rc geninfo_unexecuted_blocks=1
00:05:28.993  		
00:05:28.993  		'
00:05:28.993    16:54:52 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:28.993  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.994  		--rc genhtml_branch_coverage=1
00:05:28.994  		--rc genhtml_function_coverage=1
00:05:28.994  		--rc genhtml_legend=1
00:05:28.994  		--rc geninfo_all_blocks=1
00:05:28.994  		--rc geninfo_unexecuted_blocks=1
00:05:28.994  		
00:05:28.994  		'
00:05:28.994    16:54:52 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:28.994  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.994  		--rc genhtml_branch_coverage=1
00:05:28.994  		--rc genhtml_function_coverage=1
00:05:28.994  		--rc genhtml_legend=1
00:05:28.994  		--rc geninfo_all_blocks=1
00:05:28.994  		--rc geninfo_unexecuted_blocks=1
00:05:28.994  		
00:05:28.994  		'
00:05:28.994   16:54:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:28.994   16:54:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60911
00:05:28.994   16:54:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60911
00:05:28.994   16:54:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60911 ']'
00:05:28.994   16:54:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.994   16:54:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.994  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.994   16:54:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:28.994   16:54:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.994   16:54:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:28.994   16:54:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:29.253  [2024-12-09 16:54:52.105549] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:29.253  [2024-12-09 16:54:52.105658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ]
00:05:29.253  [2024-12-09 16:54:52.267370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.527  [2024-12-09 16:54:52.371869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.098   16:54:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.098   16:54:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:30.098   16:54:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:05:30.359  {
00:05:30.359    "version": "SPDK v25.01-pre git sha1 9237e57ed",
00:05:30.359    "fields": {
00:05:30.359      "major": 25,
00:05:30.359      "minor": 1,
00:05:30.359      "patch": 0,
00:05:30.359      "suffix": "-pre",
00:05:30.359      "commit": "9237e57ed"
00:05:30.359    }
00:05:30.359  }
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:30.359    16:54:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:30.359    16:54:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:30.359    16:54:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.359    16:54:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:30.359    16:54:53 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:30.359    16:54:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
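The heavily backslash-escaped right-hand side at cmdline.sh@28 is just how xtrace renders a literal (quoted, non-glob) pattern inside [[ == ]]; the assertion is simply that the sorted output of rpc_get_methods equals exactly the two allowed methods. A one-line equivalent:

    [[ "rpc_get_methods spdk_get_version" == "rpc_get_methods spdk_get_version" ]] && echo match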
00:05:30.359   16:54:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:30.359    16:54:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:30.359    16:54:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:05:30.359   16:54:53 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:30.619  request:
00:05:30.619  {
00:05:30.619    "method": "env_dpdk_get_mem_stats",
00:05:30.619    "req_id": 1
00:05:30.619  }
00:05:30.619  Got JSON-RPC error response
00:05:30.619  response:
00:05:30.619  {
00:05:30.619    "code": -32601,
00:05:30.619    "message": "Method not found"
00:05:30.619  }
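This -32601 response is the point of the test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other JSON-RPC method is rejected as if it did not exist, and the NOT wrapper above converts that expected failure into a pass (es=1 below). The check can be reproduced by hand from a built SPDK tree, using the same paths as this log:

    # Start the target with a restricted RPC allowlist, then probe it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed: prints the version object
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # allowed: lists exactly two methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # rejected: -32601 "Method not found"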
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:30.619   16:54:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60911
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60911 ']'
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60911
00:05:30.619    16:54:53 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.619    16:54:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60911
00:05:30.619  killing process with pid 60911
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60911'
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 60911
00:05:30.619   16:54:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 60911
00:05:31.999  ************************************
00:05:31.999  END TEST app_cmdline
00:05:31.999  ************************************
00:05:31.999  
00:05:31.999  real	0m3.087s
00:05:31.999  user	0m3.399s
00:05:31.999  sys	0m0.437s
00:05:31.999   16:54:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.999   16:54:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:32.285   16:54:55  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:32.285   16:54:55  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:32.285   16:54:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.285   16:54:55  -- common/autotest_common.sh@10 -- # set +x
00:05:32.285  ************************************
00:05:32.285  START TEST version
00:05:32.285  ************************************
00:05:32.285   16:54:55 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:32.285  * Looking for test storage...
00:05:32.285  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:32.285     16:54:55 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:32.285     16:54:55 version -- common/autotest_common.sh@1711 -- # lcov --version
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:32.285    16:54:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:32.285    16:54:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:32.285    16:54:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:32.285    16:54:55 version -- scripts/common.sh@336 -- # IFS=.-:
00:05:32.285    16:54:55 version -- scripts/common.sh@336 -- # read -ra ver1
00:05:32.285    16:54:55 version -- scripts/common.sh@337 -- # IFS=.-:
00:05:32.285    16:54:55 version -- scripts/common.sh@337 -- # read -ra ver2
00:05:32.285    16:54:55 version -- scripts/common.sh@338 -- # local 'op=<'
00:05:32.285    16:54:55 version -- scripts/common.sh@340 -- # ver1_l=2
00:05:32.285    16:54:55 version -- scripts/common.sh@341 -- # ver2_l=1
00:05:32.285    16:54:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:32.285    16:54:55 version -- scripts/common.sh@344 -- # case "$op" in
00:05:32.285    16:54:55 version -- scripts/common.sh@345 -- # : 1
00:05:32.285    16:54:55 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:32.285    16:54:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:32.285     16:54:55 version -- scripts/common.sh@365 -- # decimal 1
00:05:32.285     16:54:55 version -- scripts/common.sh@353 -- # local d=1
00:05:32.285     16:54:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:32.285     16:54:55 version -- scripts/common.sh@355 -- # echo 1
00:05:32.285    16:54:55 version -- scripts/common.sh@365 -- # ver1[v]=1
00:05:32.285     16:54:55 version -- scripts/common.sh@366 -- # decimal 2
00:05:32.285     16:54:55 version -- scripts/common.sh@353 -- # local d=2
00:05:32.285     16:54:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:32.285     16:54:55 version -- scripts/common.sh@355 -- # echo 2
00:05:32.285    16:54:55 version -- scripts/common.sh@366 -- # ver2[v]=2
00:05:32.285    16:54:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:32.285    16:54:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:32.285    16:54:55 version -- scripts/common.sh@368 -- # return 0
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:32.285  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.285  		--rc genhtml_branch_coverage=1
00:05:32.285  		--rc genhtml_function_coverage=1
00:05:32.285  		--rc genhtml_legend=1
00:05:32.285  		--rc geninfo_all_blocks=1
00:05:32.285  		--rc geninfo_unexecuted_blocks=1
00:05:32.285  		
00:05:32.285  		'
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:32.285  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.285  		--rc genhtml_branch_coverage=1
00:05:32.285  		--rc genhtml_function_coverage=1
00:05:32.285  		--rc genhtml_legend=1
00:05:32.285  		--rc geninfo_all_blocks=1
00:05:32.285  		--rc geninfo_unexecuted_blocks=1
00:05:32.285  		
00:05:32.285  		'
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:32.285  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.285  		--rc genhtml_branch_coverage=1
00:05:32.285  		--rc genhtml_function_coverage=1
00:05:32.285  		--rc genhtml_legend=1
00:05:32.285  		--rc geninfo_all_blocks=1
00:05:32.285  		--rc geninfo_unexecuted_blocks=1
00:05:32.285  		
00:05:32.285  		'
00:05:32.285    16:54:55 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:32.285  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.285  		--rc genhtml_branch_coverage=1
00:05:32.285  		--rc genhtml_function_coverage=1
00:05:32.285  		--rc genhtml_legend=1
00:05:32.285  		--rc geninfo_all_blocks=1
00:05:32.285  		--rc geninfo_unexecuted_blocks=1
00:05:32.285  		
00:05:32.285  		'
00:05:32.285    16:54:55 version -- app/version.sh@17 -- # get_header_version major
00:05:32.285    16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # cut -f2
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.285   16:54:55 version -- app/version.sh@17 -- # major=25
00:05:32.285    16:54:55 version -- app/version.sh@18 -- # get_header_version minor
00:05:32.285    16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # cut -f2
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.285   16:54:55 version -- app/version.sh@18 -- # minor=1
00:05:32.285    16:54:55 version -- app/version.sh@19 -- # get_header_version patch
00:05:32.285    16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # cut -f2
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.285   16:54:55 version -- app/version.sh@19 -- # patch=0
00:05:32.285    16:54:55 version -- app/version.sh@20 -- # get_header_version suffix
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # cut -f2
00:05:32.285    16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:32.285    16:54:55 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.285   16:54:55 version -- app/version.sh@20 -- # suffix=-pre
00:05:32.285   16:54:55 version -- app/version.sh@22 -- # version=25.1
00:05:32.285   16:54:55 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:32.285   16:54:55 version -- app/version.sh@28 -- # version=25.1rc0
00:05:32.285   16:54:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:32.285    16:54:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:32.285   16:54:55 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:32.285   16:54:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
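Each component above is extracted with the same three-stage pipeline: grep the matching #define out of include/spdk/version.h, cut the tab-separated value field, and strip the quotes; version.sh then assembles 25.1, skips the patch because it is 0, and maps the -pre suffix to rc0 before comparing against python's spdk.__version__. A condensed sketch of that flow (reconstructed from the trace; the real version.sh may differ in detail):

    get_header_version() {   # e.g. get_header_version MAJOR -> 25
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)   # 25
    minor=$(get_header_version MINOR)   # 1
    patch=$(get_header_version PATCH)   # 0
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    version=${version}rc0               # assumption: a -pre suffix is published as rc0
    echo "$version"                     # 25.1rc0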
00:05:32.285  
00:05:32.285  real	0m0.206s
00:05:32.285  user	0m0.124s
00:05:32.285  sys	0m0.108s
00:05:32.285  ************************************
00:05:32.285  END TEST version
00:05:32.285  ************************************
00:05:32.285   16:54:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.285   16:54:55 version -- common/autotest_common.sh@10 -- # set +x
00:05:32.285   16:54:55  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:32.285   16:54:55  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:32.285    16:54:55  -- spdk/autotest.sh@194 -- # uname -s
00:05:32.285   16:54:55  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:32.286   16:54:55  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:32.286   16:54:55  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:32.286   16:54:55  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:05:32.286   16:54:55  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:05:32.286   16:54:55  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:32.286   16:54:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.286   16:54:55  -- common/autotest_common.sh@10 -- # set +x
00:05:32.286  ************************************
00:05:32.286  START TEST blockdev_nvme
00:05:32.286  ************************************
00:05:32.286   16:54:55 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:05:32.546  * Looking for test storage...
00:05:32.546  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:32.546     16:54:55 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:05:32.546     16:54:55 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:32.546     16:54:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:32.546    16:54:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:32.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.546  		--rc genhtml_branch_coverage=1
00:05:32.546  		--rc genhtml_function_coverage=1
00:05:32.546  		--rc genhtml_legend=1
00:05:32.546  		--rc geninfo_all_blocks=1
00:05:32.546  		--rc geninfo_unexecuted_blocks=1
00:05:32.546  		
00:05:32.546  		'
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:32.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.546  		--rc genhtml_branch_coverage=1
00:05:32.546  		--rc genhtml_function_coverage=1
00:05:32.546  		--rc genhtml_legend=1
00:05:32.546  		--rc geninfo_all_blocks=1
00:05:32.546  		--rc geninfo_unexecuted_blocks=1
00:05:32.546  		
00:05:32.546  		'
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:32.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.546  		--rc genhtml_branch_coverage=1
00:05:32.546  		--rc genhtml_function_coverage=1
00:05:32.546  		--rc genhtml_legend=1
00:05:32.546  		--rc geninfo_all_blocks=1
00:05:32.546  		--rc geninfo_unexecuted_blocks=1
00:05:32.546  		
00:05:32.546  		'
00:05:32.546    16:54:55 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:32.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.546  		--rc genhtml_branch_coverage=1
00:05:32.546  		--rc genhtml_function_coverage=1
00:05:32.546  		--rc genhtml_legend=1
00:05:32.546  		--rc geninfo_all_blocks=1
00:05:32.546  		--rc geninfo_unexecuted_blocks=1
00:05:32.546  		
00:05:32.546  		'
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:32.546    16:54:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:05:32.546    16:54:55 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme
00:05:32.546   16:54:55 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek=
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]]
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]]
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61089
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:05:32.547   16:54:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61089
00:05:32.547   16:54:55 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61089 ']'
00:05:32.547   16:54:55 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:32.547   16:54:55 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:32.547   16:54:55 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:32.547  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:32.547   16:54:55 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:32.547   16:54:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:32.547  [2024-12-09 16:54:55.542444] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:32.547  [2024-12-09 16:54:55.542719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61089 ]
00:05:32.807  [2024-12-09 16:54:55.702584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.807  [2024-12-09 16:54:55.801934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.380   16:54:56 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:33.380   16:54:56 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
00:05:33.380   16:54:56 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:05:33.380   16:54:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf
00:05:33.380   16:54:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:05:33.380   16:54:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:05:33.380    16:54:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:33.642   16:54:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:05:33.642   16:54:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.642   16:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:33.903   16:54:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
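The single-line JSON handed to load_subsystem_config above is generated by scripts/gen_nvme.sh and attaches one PCIe controller per emulated NVMe device. The same content, pretty-printed for readability:

    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }

The bdev_get_bdevs dump that follows shows the result: six bdevs, because the controller at 0000:00:12.0 exposes three namespaces (Nvme2n1 through Nvme2n3).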
00:05:33.903   16:54:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:05:33.903   16:54:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.903   16:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:33.903   16:54:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.903   16:54:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat
00:05:33.903    16:54:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.903    16:54:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.903    16:54:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.903   16:54:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:05:33.903    16:54:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:33.903    16:54:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:05:33.903    16:54:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.903   16:54:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:05:33.903    16:54:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name
00:05:33.904    16:54:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "9c734192-d2b4-48c4-aa23-24930a021f1a"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "9c734192-d2b4-48c4-aa23-24930a021f1a",' '  "numa_id": -1,' '  "md_size": 64,' '  "md_interleave": false,' '  "dif_type": 0,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": true,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1",' '  "aliases": [' '    "c0f5186f-4bdb-4e74-8ae8-cc8aef80e12d"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "c0f5186f-4bdb-4e74-8ae8-cc8aef80e12d",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:11.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:11.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12341",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12341",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            
"firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n1",' '  "aliases": [' '    "2a7104b7-3c43-46c0-9497-e40bd1121a60"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "2a7104b7-3c43-46c0-9497-e40bd1121a60",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n2",' '  "aliases": [' '    "18608a24-ae0b-4c51-880b-d673759f6b8b"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "18608a24-ae0b-4c51-880b-d673759f6b8b",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          
"serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 2,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n3",' '  "aliases": [' '    "75cc88e3-882e-4281-929d-e81e5d3138b2"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "75cc88e3-882e-4281-929d-e81e5d3138b2",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 3,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme3n1",' '  "aliases": [' '    "d45d7dbd-3805-471c-b188-6fd0f723ce81"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "d45d7dbd-3805-471c-b188-6fd0f723ce81",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:13.0",' '        "trid": {' '          
"trtype": "PCIe",' '          "traddr": "0000:00:13.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12343",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": true,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": true' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:05:33.904   16:54:56 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:05:33.904   16:54:56 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1
00:05:33.904   16:54:56 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:05:33.904   16:54:56 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61089
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61089 ']'
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61089
00:05:33.904    16:54:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:33.904    16:54:56 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61089
00:05:33.904  killing process with pid 61089
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61089'
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61089
00:05:33.904   16:54:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61089
00:05:35.818   16:54:58 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:05:35.818   16:54:58 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:05:35.818   16:54:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:05:35.818   16:54:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.818   16:54:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:35.818  ************************************
00:05:35.818  START TEST bdev_hello_world
00:05:35.818  ************************************
00:05:35.818   16:54:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:05:35.818  [2024-12-09 16:54:58.519461] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:35.818  [2024-12-09 16:54:58.519714] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ]
00:05:35.818  [2024-12-09 16:54:58.678088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.818  [2024-12-09 16:54:58.780228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.387  [2024-12-09 16:54:59.323737] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:05:36.387  [2024-12-09 16:54:59.323997] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:05:36.387  [2024-12-09 16:54:59.324029] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:05:36.387  [2024-12-09 16:54:59.326475] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:05:36.387  [2024-12-09 16:54:59.327651] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:05:36.387  [2024-12-09 16:54:59.327754] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:05:36.387  [2024-12-09 16:54:59.328277] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:05:36.387  
00:05:36.387  [2024-12-09 16:54:59.328304] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:05:37.319  
00:05:37.319  real	0m1.603s
00:05:37.319  user	0m1.306s
00:05:37.319  sys	0m0.187s
00:05:37.319   16:55:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.319  ************************************
00:05:37.319  END TEST bdev_hello_world
00:05:37.319  ************************************
00:05:37.319   16:55:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:05:37.319   16:55:00 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:05:37.319   16:55:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:37.319   16:55:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.319   16:55:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:37.319  ************************************
00:05:37.319  START TEST bdev_bounds
00:05:37.319  ************************************
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61204
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:05:37.319  Process bdevio pid: 61204
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61204'
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61204
00:05:37.319  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61204 ']'
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.319   16:55:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:37.319  [2024-12-09 16:55:00.152917] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:37.319  [2024-12-09 16:55:00.153057] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61204 ]
00:05:37.319  [2024-12-09 16:55:00.311028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:37.577  [2024-12-09 16:55:00.414252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.577  [2024-12-09 16:55:00.414581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.577  [2024-12-09 16:55:00.415080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:38.235   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:38.235   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:05:38.235   16:55:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:05:38.235  I/O targets:
00:05:38.235    Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:05:38.235    Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:05:38.235    Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:05:38.235    Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:05:38.235    Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:05:38.235    Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:05:38.235  
00:05:38.235  
00:05:38.235       CUnit - A unit testing framework for C - Version 2.1-3
00:05:38.235       http://cunit.sourceforge.net/
00:05:38.235  
00:05:38.235  
00:05:38.235  Suite: bdevio tests on: Nvme3n1
00:05:38.235    Test: blockdev write read block ...passed
00:05:38.235    Test: blockdev write zeroes read block ...passed
00:05:38.235    Test: blockdev write zeroes read no split ...passed
00:05:38.235    Test: blockdev write zeroes read split ...passed
00:05:38.235    Test: blockdev write zeroes read split partial ...passed
00:05:38.235    Test: blockdev reset ...[2024-12-09 16:55:01.141604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:05:38.235  [2024-12-09 16:55:01.144510] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:05:38.235  passed
00:05:38.235    Test: blockdev write read 8 blocks ...passed
00:05:38.235    Test: blockdev write read size > 128k ...passed
00:05:38.236    Test: blockdev write read invalid size ...passed
00:05:38.236    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:38.236    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:38.236    Test: blockdev write read max offset ...passed
00:05:38.236    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:38.236    Test: blockdev writev readv 8 blocks ...passed
00:05:38.236    Test: blockdev writev readv 30 x 1block ...passed
00:05:38.236    Test: blockdev writev readv block ...passed
00:05:38.236    Test: blockdev writev readv size > 128k ...passed
00:05:38.236    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:38.236    Test: blockdev comparev and writev ...[2024-12-09 16:55:01.154422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b740a000 len:0x1000
00:05:38.236  [2024-12-09 16:55:01.155041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:05:38.236  passed
00:05:38.236    Test: blockdev nvme passthru rw ...passed
00:05:38.236    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:01.156566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:05:38.236  [2024-12-09 16:55:01.156735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:05:38.236  passed
00:05:38.236    Test: blockdev nvme admin passthru ...passed
00:05:38.236    Test: blockdev copy ...passed
00:05:38.236  Suite: bdevio tests on: Nvme2n3
00:05:38.236    Test: blockdev write read block ...passed
00:05:38.236    Test: blockdev write zeroes read block ...passed
00:05:38.236    Test: blockdev write zeroes read no split ...passed
00:05:38.236    Test: blockdev write zeroes read split ...passed
00:05:38.236    Test: blockdev write zeroes read split partial ...passed
00:05:38.236    Test: blockdev reset ...[2024-12-09 16:55:01.209812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:05:38.236  [2024-12-09 16:55:01.212710] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:05:38.236  passed
00:05:38.236    Test: blockdev write read 8 blocks ...passed
00:05:38.236    Test: blockdev write read size > 128k ...passed
00:05:38.236    Test: blockdev write read invalid size ...passed
00:05:38.236    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:38.236    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:38.236    Test: blockdev write read max offset ...passed
00:05:38.236    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:38.236    Test: blockdev writev readv 8 blocks ...passed
00:05:38.236    Test: blockdev writev readv 30 x 1block ...passed
00:05:38.236    Test: blockdev writev readv block ...passed
00:05:38.236    Test: blockdev writev readv size > 128k ...passed
00:05:38.236    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:38.236    Test: blockdev comparev and writev ...[2024-12-09 16:55:01.222411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb406000 len:0x1000
00:05:38.236  [2024-12-09 16:55:01.223032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:05:38.236  passed
00:05:38.236    Test: blockdev nvme passthru rw ...passed
00:05:38.236    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:01.224787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:05:38.236  [2024-12-09 16:55:01.225325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:05:38.236  passed
00:05:38.236    Test: blockdev nvme admin passthru ...passed
00:05:38.236    Test: blockdev copy ...passed
00:05:38.236  Suite: bdevio tests on: Nvme2n2
00:05:38.236    Test: blockdev write read block ...passed
00:05:38.236    Test: blockdev write zeroes read block ...passed
00:05:38.236    Test: blockdev write zeroes read no split ...passed
00:05:38.236    Test: blockdev write zeroes read split ...passed
00:05:38.499    Test: blockdev write zeroes read split partial ...passed
00:05:38.499    Test: blockdev reset ...[2024-12-09 16:55:01.276971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:05:38.499  [2024-12-09 16:55:01.281495] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:05:38.499  passed
00:05:38.499    Test: blockdev write read 8 blocks ...passed
00:05:38.499    Test: blockdev write read size > 128k ...passed
00:05:38.499    Test: blockdev write read invalid size ...passed
00:05:38.499    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:38.499    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:38.499    Test: blockdev write read max offset ...passed
00:05:38.499    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:38.499    Test: blockdev writev readv 8 blocks ...passed
00:05:38.499    Test: blockdev writev readv 30 x 1block ...passed
00:05:38.499    Test: blockdev writev readv block ...passed
00:05:38.499    Test: blockdev writev readv size > 128k ...passed
00:05:38.499    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:38.499    Test: blockdev comparev and writev ...[2024-12-09 16:55:01.290122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc03c000 len:0x1000
00:05:38.499  [2024-12-09 16:55:01.290190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:05:38.499  passed
00:05:38.499    Test: blockdev nvme passthru rw ...passed
00:05:38.499    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:01.291051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:05:38.500  [2024-12-09 16:55:01.291100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme admin passthru ...passed
00:05:38.500    Test: blockdev copy ...passed
00:05:38.500  Suite: bdevio tests on: Nvme2n1
00:05:38.500    Test: blockdev write read block ...passed
00:05:38.500    Test: blockdev write zeroes read block ...passed
00:05:38.500    Test: blockdev write zeroes read no split ...passed
00:05:38.500    Test: blockdev write zeroes read split ...passed
00:05:38.500    Test: blockdev write zeroes read split partial ...passed
00:05:38.500    Test: blockdev reset ...[2024-12-09 16:55:01.345699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:05:38.500  [2024-12-09 16:55:01.348716] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:05:38.500  passed
00:05:38.500    Test: blockdev write read 8 blocks ...passed
00:05:38.500    Test: blockdev write read size > 128k ...passed
00:05:38.500    Test: blockdev write read invalid size ...passed
00:05:38.500    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:38.500    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:38.500    Test: blockdev write read max offset ...passed
00:05:38.500    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:38.500    Test: blockdev writev readv 8 blocks ...passed
00:05:38.500    Test: blockdev writev readv 30 x 1block ...passed
00:05:38.500    Test: blockdev writev readv block ...passed
00:05:38.500    Test: blockdev writev readv size > 128k ...passed
00:05:38.500    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:38.500    Test: blockdev comparev and writev ...[2024-12-09 16:55:01.355165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc038000 len:0x1000
00:05:38.500  [2024-12-09 16:55:01.355227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme passthru rw ...passed
00:05:38.500    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:01.355985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:05:38.500  [2024-12-09 16:55:01.356116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme admin passthru ...passed
00:05:38.500    Test: blockdev copy ...passed
00:05:38.500  Suite: bdevio tests on: Nvme1n1
00:05:38.500    Test: blockdev write read block ...passed
00:05:38.500    Test: blockdev write zeroes read block ...passed
00:05:38.500    Test: blockdev write zeroes read no split ...passed
00:05:38.500    Test: blockdev write zeroes read split ...passed
00:05:38.500    Test: blockdev write zeroes read split partial ...passed
00:05:38.500    Test: blockdev reset ...[2024-12-09 16:55:01.414136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:05:38.500  [2024-12-09 16:55:01.416694] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:05:38.500  passed
00:05:38.500    Test: blockdev write read 8 blocks ...passed
00:05:38.500    Test: blockdev write read size > 128k ...passed
00:05:38.500    Test: blockdev write read invalid size ...passed
00:05:38.500    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:38.500    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:38.500    Test: blockdev write read max offset ...passed
00:05:38.500    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:38.500    Test: blockdev writev readv 8 blocks ...passed
00:05:38.500    Test: blockdev writev readv 30 x 1block ...passed
00:05:38.500    Test: blockdev writev readv block ...passed
00:05:38.500    Test: blockdev writev readv size > 128k ...passed
00:05:38.500    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:38.500    Test: blockdev comparev and writev ...[2024-12-09 16:55:01.435104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc034000 len:0x1000
00:05:38.500  [2024-12-09 16:55:01.435318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme passthru rw ...passed
00:05:38.500    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:01.438168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:05:38.500  [2024-12-09 16:55:01.438291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme admin passthru ...passed
00:05:38.500    Test: blockdev copy ...passed
00:05:38.500  Suite: bdevio tests on: Nvme0n1
00:05:38.500    Test: blockdev write read block ...passed
00:05:38.500    Test: blockdev write zeroes read block ...passed
00:05:38.500    Test: blockdev write zeroes read no split ...passed
00:05:38.500    Test: blockdev write zeroes read split ...passed
00:05:38.500    Test: blockdev write zeroes read split partial ...passed
00:05:38.500    Test: blockdev reset ...[2024-12-09 16:55:01.496249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:05:38.500  [2024-12-09 16:55:01.499018] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:05:38.500  passed
00:05:38.500    Test: blockdev write read 8 blocks ...passed
00:05:38.500    Test: blockdev write read size > 128k ...passed
00:05:38.500    Test: blockdev write read invalid size ...passed
00:05:38.500    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:38.500    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:38.500    Test: blockdev write read max offset ...passed
00:05:38.500    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:38.500    Test: blockdev writev readv 8 blocks ...passed
00:05:38.500    Test: blockdev writev readv 30 x 1block ...passed
00:05:38.500    Test: blockdev writev readv block ...passed
00:05:38.500    Test: blockdev writev readv size > 128k ...passed
00:05:38.500    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:38.500    Test: blockdev comparev and writev ...[2024-12-09 16:55:01.505633] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:05:38.500  separate metadata which is not supported yet.
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme passthru rw ...passed
00:05:38.500    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:01.506172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:05:38.500  [2024-12-09 16:55:01.506273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:05:38.500  passed
00:05:38.500    Test: blockdev nvme admin passthru ...passed
00:05:38.500    Test: blockdev copy ...passed
00:05:38.500  
00:05:38.500  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:38.500                suites      6      6    n/a      0        0
00:05:38.500                 tests    138    138    138      0        0
00:05:38.500               asserts    893    893    893      0      n/a
00:05:38.500  
00:05:38.500  Elapsed time =    1.034 seconds
00:05:38.500  0
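The bounds suite works in two processes: bdevio starts with the shared bdev config and listens on the default RPC socket, then tests.py drives the CUnit suites over that socket with perform_tests. A hand-run equivalent, with the flags copied from the invocation logged above, looks roughly like:

    # Sketch: start bdevio, run the CUnit suites, then tear it down.
    sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # (the harness waits for /var/tmp/spdk.sock before calling tests.py)
    sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    sudo kill "$bdevio_pid"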
00:05:38.500   16:55:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61204
00:05:38.500   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61204 ']'
00:05:38.500   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61204
00:05:38.500    16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:05:38.500   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:38.500    16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61204
00:05:38.781   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:38.781   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:38.781   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61204'
00:05:38.781  killing process with pid 61204
00:05:38.781   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61204
00:05:38.781   16:55:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61204
00:05:39.351   16:55:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:05:39.351  
00:05:39.351  real	0m2.131s
00:05:39.351  user	0m5.385s
00:05:39.351  sys	0m0.289s
00:05:39.351   16:55:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.351   16:55:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:39.351  ************************************
00:05:39.351  END TEST bdev_bounds
00:05:39.351  ************************************
00:05:39.351   16:55:02 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:05:39.351   16:55:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:39.351   16:55:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:39.351   16:55:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:39.351  ************************************
00:05:39.351  START TEST bdev_nbd
00:05:39.351  ************************************
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:05:39.351    16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:05:39.351   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61258
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61258 /var/tmp/spdk-nbd.sock
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61258 ']'
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:39.352  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.352   16:55:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:05:39.352  [2024-12-09 16:55:02.358774] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:39.352  [2024-12-09 16:55:02.359058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:39.612  [2024-12-09 16:55:02.520265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.612  [2024-12-09 16:55:02.625935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
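The nbd stage pairs a minimal SPDK app with kernel nbd devices: bdev_svc loads the same bdev config and serves RPC on /var/tmp/spdk-nbd.sock, and each bdev is then exported with nbd_start_disk. When no /dev/nbdX argument is given, as in the calls below, the RPC allocates the next free device and prints its path, which the harness captures. A sketch of those two steps, using the exact commands from this run:

    # Sketch: start the bdev service, then export a bdev over nbd.
    sudo /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # With no device argument, the reply names the allocated /dev/nbdX.
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1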
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:05:40.184   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:40.184    16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:05:40.444    16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:40.444  1+0 records in
00:05:40.444  1+0 records out
00:05:40.444  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618329 s, 6.6 MB/s
00:05:40.444    16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
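The waitfornbd helper traced above has two phases: poll /proc/partitions until the named device appears, then issue one 4 KiB direct-I/O read through dd and check that the copied file has nonzero size, proving the device can complete real I/O. A condensed sketch of that pattern (the retry count and the scratch-file path are illustrative):

    # Sketch: wait for an nbd device, then verify it services a read.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB O_DIRECT read; a nonzero output size means the
        # device actually completed I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }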
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:05:40.444   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:40.445    16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:05:40.704    16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:40.704  1+0 records in
00:05:40.704  1+0 records out
00:05:40.704  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533659 s, 7.7 MB/s
00:05:40.704    16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:40.704   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:40.965   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:40.965   16:55:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:40.965   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:05:40.965   16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:40.965    16:55:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:05:41.224    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:41.224  1+0 records in
00:05:41.224  1+0 records out
00:05:41.224  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862983 s, 4.7 MB/s
00:05:41.224    16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:05:41.224   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:41.224    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:05:41.482    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:41.482  1+0 records in
00:05:41.482  1+0 records out
00:05:41.482  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893405 s, 4.6 MB/s
00:05:41.482    16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:41.482    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:05:41.482    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:05:41.482   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:41.742   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:41.742   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:41.742   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:41.742  1+0 records in
00:05:41.742  1+0 records out
00:05:41.742  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113744 s, 3.6 MB/s
00:05:41.742    16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:41.743    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:05:41.743    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:41.743  1+0 records in
00:05:41.743  1+0 records out
00:05:41.743  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101133 s, 4.1 MB/s
00:05:41.743    16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:05:41.743   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:05:41.743    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:42.003   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd0",
00:05:42.003      "bdev_name": "Nvme0n1"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd1",
00:05:42.003      "bdev_name": "Nvme1n1"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd2",
00:05:42.003      "bdev_name": "Nvme2n1"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd3",
00:05:42.003      "bdev_name": "Nvme2n2"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd4",
00:05:42.003      "bdev_name": "Nvme2n3"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd5",
00:05:42.003      "bdev_name": "Nvme3n1"
00:05:42.003    }
00:05:42.003  ]'
00:05:42.003   16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:05:42.003    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd0",
00:05:42.003      "bdev_name": "Nvme0n1"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd1",
00:05:42.003      "bdev_name": "Nvme1n1"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd2",
00:05:42.003      "bdev_name": "Nvme2n1"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd3",
00:05:42.003      "bdev_name": "Nvme2n2"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd4",
00:05:42.003      "bdev_name": "Nvme2n3"
00:05:42.003    },
00:05:42.003    {
00:05:42.003      "nbd_device": "/dev/nbd5",
00:05:42.003      "bdev_name": "Nvme3n1"
00:05:42.003    }
00:05:42.003  ]'
00:05:42.003    16:55:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
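nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, and the harness reduces it to a device list with jq, as in the pipeline above. Run standalone, the same extraction is:

    # Sketch: list only the exported device paths.
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_get_disks | jq -r '.[] | .nbd_device'
    # -> /dev/nbd0 through /dev/nbd5 while the six disks are attached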
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:42.003   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:42.264    16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:42.264   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:42.562    16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:42.562   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:05:42.823    16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:42.823   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:05:43.081    16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:43.081   16:55:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:05:43.339    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:43.339   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:05:43.598    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:43.598   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:43.598    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:43.598    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.598     16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:43.856    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:43.856     16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:43.856     16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:43.856    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:43.856     16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:05:43.856     16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:43.856     16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:05:43.856    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:05:43.856    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
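After all six nbd_stop_disk calls, the harness queries nbd_get_disks again and counts /dev/nbd matches in the now-empty reply; grep -c exits nonzero when it finds zero matches, which is why the guard traced as '# true' above is needed. A sketch of that final check:

    # Sketch: assert that no nbd devices remain exported.
    count=$(sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
                -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]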
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:05:43.856   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:05:43.856  /dev/nbd0
00:05:44.114    16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:44.115  1+0 records in
00:05:44.115  1+0 records out
00:05:44.115  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360938 s, 11.3 MB/s
00:05:44.115    16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:05:44.115   16:55:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:05:44.115  /dev/nbd1
00:05:44.115    16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:44.115   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:44.115   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:44.115   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:44.115   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.115   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.115   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:44.373  1+0 records in
00:05:44.373  1+0 records out
00:05:44.373  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591086 s, 6.9 MB/s
00:05:44.373    16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:05:44.373  /dev/nbd10
00:05:44.373    16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:44.373  1+0 records in
00:05:44.373  1+0 records out
00:05:44.373  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469162 s, 8.7 MB/s
00:05:44.373    16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:05:44.373   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:05:44.940  /dev/nbd11
00:05:44.940    16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:44.940  1+0 records in
00:05:44.940  1+0 records out
00:05:44.940  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515943 s, 7.9 MB/s
00:05:44.940    16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:05:44.940  /dev/nbd12
00:05:44.940    16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:44.940  1+0 records in
00:05:44.940  1+0 records out
00:05:44.940  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044723 s, 9.2 MB/s
00:05:44.940    16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:05:44.940   16:55:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:05:45.198  /dev/nbd13
00:05:45.198    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:05:45.198  1+0 records in
00:05:45.198  1+0 records out
00:05:45.198  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314201 s, 13.0 MB/s
00:05:45.198    16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:45.198   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
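The six blocks above all trace the same helper: after each nbd_start_disk RPC, waitfornbd polls /proc/partitions until the kernel registers the device, then proves it is readable by pulling one 4 KiB block with O_DIRECT and checking that a non-empty file came back. A condensed sketch of that pattern as traced (the real helper lives in common/autotest_common.sh; the temp-file path and sleep interval here are illustrative):

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device is readable: one direct-I/O 4 KiB read must yield data.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }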
00:05:45.198    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:45.198    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:45.198     16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:45.456    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd0",
00:05:45.456      "bdev_name": "Nvme0n1"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd1",
00:05:45.456      "bdev_name": "Nvme1n1"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd10",
00:05:45.456      "bdev_name": "Nvme2n1"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd11",
00:05:45.456      "bdev_name": "Nvme2n2"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd12",
00:05:45.456      "bdev_name": "Nvme2n3"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd13",
00:05:45.456      "bdev_name": "Nvme3n1"
00:05:45.456    }
00:05:45.456  ]'
00:05:45.456     16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd0",
00:05:45.456      "bdev_name": "Nvme0n1"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd1",
00:05:45.456      "bdev_name": "Nvme1n1"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd10",
00:05:45.456      "bdev_name": "Nvme2n1"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd11",
00:05:45.456      "bdev_name": "Nvme2n2"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd12",
00:05:45.456      "bdev_name": "Nvme2n3"
00:05:45.456    },
00:05:45.456    {
00:05:45.456      "nbd_device": "/dev/nbd13",
00:05:45.456      "bdev_name": "Nvme3n1"
00:05:45.456    }
00:05:45.456  ]'
00:05:45.456     16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:45.456    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:45.456  /dev/nbd1
00:05:45.456  /dev/nbd10
00:05:45.456  /dev/nbd11
00:05:45.456  /dev/nbd12
00:05:45.456  /dev/nbd13'
00:05:45.456     16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:45.456  /dev/nbd1
00:05:45.456  /dev/nbd10
00:05:45.456  /dev/nbd11
00:05:45.456  /dev/nbd12
00:05:45.456  /dev/nbd13'
00:05:45.456     16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:45.456    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:05:45.456    16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
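The count check works by asking the SPDK app over its RPC socket for the attached disks and counting device paths. A minimal sketch of nbd_get_count as traced above (rpc.py path abbreviated; the `|| true` mirrors the zero-disk case later in the log, where grep -c prints 0 but exits non-zero):

    nbd_get_count() {
        local rpc_server=$1
        rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [ "$count" -ne 6 ] && return 1        # all six bdevs must be exported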
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:05:45.456  256+0 records in
00:05:45.456  256+0 records out
00:05:45.456  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00892075 s, 118 MB/s
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:45.456   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:45.715  256+0 records in
00:05:45.715  256+0 records out
00:05:45.715  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0650244 s, 16.1 MB/s
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:45.715  256+0 records in
00:05:45.715  256+0 records out
00:05:45.715  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0716567 s, 14.6 MB/s
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:05:45.715  256+0 records in
00:05:45.715  256+0 records out
00:05:45.715  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0666691 s, 15.7 MB/s
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:05:45.715  256+0 records in
00:05:45.715  256+0 records out
00:05:45.715  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0667546 s, 15.7 MB/s
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:45.715   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:05:45.973  256+0 records in
00:05:45.973  256+0 records out
00:05:45.973  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0660679 s, 15.9 MB/s
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:05:45.973  256+0 records in
00:05:45.973  256+0 records out
00:05:45.973  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.068677 s, 15.3 MB/s
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:45.973   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
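The write and verify passes above share one helper: in write mode it generates a 1 MiB random file (256 x 4 KiB) and dd's it to every NBD device with O_DIRECT; in verify mode it cmp's the first 1 MiB of each device back against that file byte for byte. A condensed sketch under those assumptions (temp path illustrative):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256
            for dev in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for dev in "${nbd_list[@]}"; do
                cmp -b -n 1M $tmp_file $dev   # -b prints differing bytes, -n limits to 1 MiB
            done
            rm $tmp_file
        fi
    }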
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:45.974   16:55:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:46.232    16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:46.232   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:46.491    16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:46.491   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:05:46.749    16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:46.749   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:05:47.007    16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:47.007   16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:05:47.007    16:55:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:47.007   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:05:47.268    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:47.268   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
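Teardown is the mirror image: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the device name disappears. Roughly, as traced (sleep interval assumed):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }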
00:05:47.268    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:47.268    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.268     16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:47.526    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:47.526     16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:47.526     16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:47.526    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:47.526     16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:05:47.526     16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:47.526     16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:05:47.526    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:05:47.526    16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
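With all six devices stopped, nbd_get_disks returns an empty array, so the jq output is empty and grep -c reports 0 while exiting non-zero (hence the bare `true` in the trace); the test then asserts the count really is zero:

    [ "$(nbd_get_count /var/tmp/spdk-nbd.sock)" -ne 0 ] && return 1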
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:05:47.526   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:05:47.787  malloc_lvol_verify
00:05:47.787   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:05:48.048  04c1dd62-3577-4de0-bca6-5c30682edbbd
00:05:48.048   16:55:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:05:48.048  eb134d83-922d-4bc8-b42c-aa0e116cb80f
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:05:48.313  /dev/nbd0
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:05:48.313  mke2fs 1.47.0 (5-Feb-2023)
00:05:48.313  Discarding device blocks:    0/4096         done
00:05:48.313  Creating filesystem with 4096 1k blocks and 1024 inodes
00:05:48.313  
00:05:48.313  Allocating group tables: 0/1   done                            
00:05:48.313  Writing inode tables: 0/1   done                            
00:05:48.313  Creating journal (1024 blocks): done
00:05:48.313  Writing superblocks and filesystem accounting information: 0/1   done
00:05:48.313  
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:48.313   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:48.573    16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
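nbd_with_lvol_verify stacks a logical volume on a malloc bdev and proves the NBD path end to end by formatting it. Condensed from the trace above (rpc.py path abbreviated; 16/512 are presumably the malloc size in MiB and block size in bytes, and the 4 MiB lvol is what yields the 4096 one-KiB blocks mke2fs reports):

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0      # /sys/block/nbd0/size showed 8192 sectors = 4 MiB
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0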
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61258
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61258 ']'
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61258
00:05:48.573    16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:48.573    16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61258
00:05:48.573  killing process with pid 61258
00:05:48.573   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:48.574   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:48.574   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61258'
00:05:48.574   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61258
00:05:48.574   16:55:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61258
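killprocess is the stock teardown helper: verify the pid is set and alive, look up its command name (on Linux) to decide whether sudo handling is needed, then kill and reap it. A sketch of the path the trace takes (the sudo branch, not exercised here since the process is reactor_0, is elided):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                        # already gone
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }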
00:05:49.517   16:55:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:05:49.517  
00:05:49.517  real	0m10.045s
00:05:49.517  user	0m14.543s
00:05:49.517  sys	0m3.178s
00:05:49.517   16:55:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.517   16:55:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:05:49.517  ************************************
00:05:49.517  END TEST bdev_nbd
00:05:49.517  ************************************
00:05:49.517   16:55:12 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:05:49.517   16:55:12 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:05:49.517  skipping fio tests on NVMe due to multi-ns failures.
00:05:49.517   16:55:12 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:05:49.517   16:55:12 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:05:49.517   16:55:12 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:05:49.517   16:55:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:05:49.517   16:55:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.517   16:55:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:49.517  ************************************
00:05:49.517  START TEST bdev_verify
00:05:49.517  ************************************
00:05:49.517   16:55:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:05:49.517  [2024-12-09 16:55:12.471303] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:49.517  [2024-12-09 16:55:12.471472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61638 ]
00:05:49.780  [2024-12-09 16:55:12.631975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:49.780  [2024-12-09 16:55:12.736884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:49.780  [2024-12-09 16:55:12.736907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
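bdevperf was started with -m 0x3, a hex core bitmask: binary 11 selects cores 0 and 1, which is why exactly two reactors come up and why each job row in the table below carries its own per-core mask (0x1 = core 0, 0x2 = core 1).

    printf '%d\n' 0x3        # -> 3, i.e. binary 11: cores 0 and 1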
00:05:50.353  Running I/O for 5 seconds...
00:05:52.695      17024.00 IOPS,    66.50 MiB/s
[2024-12-09T16:55:16.677Z]     16832.00 IOPS,    65.75 MiB/s
[2024-12-09T16:55:17.620Z]     17087.33 IOPS,    66.75 MiB/s
[2024-12-09T16:55:18.562Z]     17848.50 IOPS,    69.72 MiB/s
[2024-12-09T16:55:18.562Z]     18446.80 IOPS,    72.06 MiB/s
00:05:55.521                                                                                                  Latency(us)
00:05:55.521  
[2024-12-09T16:55:18.562Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:05:55.521  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0x0 length 0xbd0bd
00:05:55.521  	 Nvme0n1             :       5.05    1496.63       5.85       0.00     0.00   85206.55   19156.68  104051.00
00:05:55.521  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:05:55.521  	 Nvme0n1             :       5.07    1541.23       6.02       0.00     0.00   82835.54   17140.18  105664.20
00:05:55.521  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0x0 length 0xa0000
00:05:55.521  	 Nvme1n1             :       5.07    1503.56       5.87       0.00     0.00   84606.37    8166.79  104857.60
00:05:55.521  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0xa0000 length 0xa0000
00:05:55.521  	 Nvme1n1             :       5.07    1539.26       6.01       0.00     0.00   82783.39   22080.59  109697.18
00:05:55.521  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0x0 length 0x80000
00:05:55.521  	 Nvme2n1             :       5.07    1503.11       5.87       0.00     0.00   84407.34    8721.33  109697.18
00:05:55.521  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0x80000 length 0x80000
00:05:55.521  	 Nvme2n1             :       5.07    1532.04       5.98       0.00     0.00   82793.47   22584.71  126635.72
00:05:55.521  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0x0 length 0x80000
00:05:55.521  	 Nvme2n2             :       5.08    1511.34       5.90       0.00     0.00   83849.25   10586.58  114536.76
00:05:55.521  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:55.521  	 Verification LBA range: start 0x80000 length 0x80000
00:05:55.522  	 Nvme2n2             :       5.08    1527.86       5.97       0.00     0.00   82790.78   15426.17  127442.31
00:05:55.522  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:55.522  	 Verification LBA range: start 0x0 length 0x80000
00:05:55.522  	 Nvme2n3             :       5.08    1499.86       5.86       0.00     0.00   84289.58   16938.54  126635.72
00:05:55.522  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:55.522  	 Verification LBA range: start 0x80000 length 0x80000
00:05:55.522  	 Nvme2n3             :       5.08    1536.17       6.00       0.00     0.00   82351.95    4184.22  129055.51
00:05:55.522  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:55.522  	 Verification LBA range: start 0x0 length 0x20000
00:05:55.522  	 Nvme3n1             :       5.09    1509.28       5.90       0.00     0.00   83619.27    4889.99  125022.52
00:05:55.522  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:55.522  	 Verification LBA range: start 0x20000 length 0x20000
00:05:55.522  	 Nvme3n1             :       5.11    1534.92       6.00       0.00     0.00   81988.91    3276.80  127442.31
00:05:55.522  
[2024-12-09T16:55:18.563Z]  ===================================================================================================================
00:05:55.522  
[2024-12-09T16:55:18.563Z]  Total                       :              18235.25      71.23       0.00     0.00   83448.65    3276.80  129055.51
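The MiB/s column is just IOPS times the 4096-byte IO size set by -o 4096; the Total row checks out:

    18235.25 IOPS x 4096 B/IO = 74,691,584 B/s
    74,691,584 B/s / 1,048,576 B/MiB = 71.23 MiB/s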
00:05:56.905  
00:05:56.905  real	0m7.221s
00:05:56.905  user	0m13.496s
00:05:56.905  sys	0m0.224s
00:05:56.905  ************************************
00:05:56.905  END TEST bdev_verify
00:05:56.905  ************************************
00:05:56.905   16:55:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.905   16:55:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:05:56.905   16:55:19 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:05:56.905   16:55:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:05:56.905   16:55:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:56.905   16:55:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:05:56.905  ************************************
00:05:56.905  START TEST bdev_verify_big_io
00:05:56.905  ************************************
00:05:56.905   16:55:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:05:56.905  [2024-12-09 16:55:19.736587] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:05:56.905  [2024-12-09 16:55:19.736711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ]
00:05:56.905  [2024-12-09 16:55:19.896348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:57.167  [2024-12-09 16:55:20.000703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:57.167  [2024-12-09 16:55:20.000796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.738  Running I/O for 5 seconds...
00:06:01.611        732.00 IOPS,    45.75 MiB/s
[2024-12-09T16:55:26.582Z]      1638.00 IOPS,   102.38 MiB/s
[2024-12-09T16:55:26.843Z]      1896.67 IOPS,   118.54 MiB/s
00:06:03.803                                                                                                  Latency(us)
00:06:03.803  
[2024-12-09T16:55:26.844Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:03.803  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x0 length 0xbd0b
00:06:03.803  	 Nvme0n1             :       5.80     110.40       6.90       0.00     0.00 1120045.29   21072.34 1167952.34
00:06:03.803  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:03.803  	 Nvme0n1             :       5.69     112.56       7.04       0.00     0.00 1096278.02   27625.94 1155046.79
00:06:03.803  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x0 length 0xa000
00:06:03.803  	 Nvme1n1             :       5.80     110.36       6.90       0.00     0.00 1080820.34  112116.97  974369.08
00:06:03.803  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0xa000 length 0xa000
00:06:03.803  	 Nvme1n1             :       5.69     112.52       7.03       0.00     0.00 1060404.62  115343.36  967916.31
00:06:03.803  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x0 length 0x8000
00:06:03.803  	 Nvme2n1             :       5.87     113.19       7.07       0.00     0.00 1017998.40   72997.02  884030.23
00:06:03.803  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x8000 length 0x8000
00:06:03.803  	 Nvme2n1             :       5.80     114.25       7.14       0.00     0.00 1006525.90  112116.97 1000180.18
00:06:03.803  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x0 length 0x8000
00:06:03.803  	 Nvme2n2             :       5.90     119.34       7.46       0.00     0.00  939069.01   21173.17  974369.08
00:06:03.803  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x8000 length 0x8000
00:06:03.803  	 Nvme2n2             :       5.94     124.22       7.76       0.00     0.00  902535.64   22584.71 1025991.29
00:06:03.803  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x0 length 0x8000
00:06:03.803  	 Nvme2n3             :       5.95     115.15       7.20       0.00     0.00  938503.78   46580.97 2077793.67
00:06:03.803  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x8000 length 0x8000
00:06:03.803  	 Nvme2n3             :       5.94     129.22       8.08       0.00     0.00  842790.33   45572.73 1064707.94
00:06:03.803  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x0 length 0x2000
00:06:03.803  	 Nvme3n1             :       6.04     135.11       8.44       0.00     0.00  777418.46     863.31 2103604.78
00:06:03.803  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:03.803  	 Verification LBA range: start 0x2000 length 0x2000
00:06:03.803  	 Nvme3n1             :       6.01     148.98       9.31       0.00     0.00  708702.97     759.34 1090519.04
00:06:03.803  
[2024-12-09T16:55:26.844Z]  ===================================================================================================================
00:06:03.803  
[2024-12-09T16:55:26.844Z]  Total                       :               1445.30      90.33       0.00     0.00  944062.60     759.34 2103604.78
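Same arithmetic, larger IOs: this pass uses -o 65536, so 1445.30 IOPS x 65,536 B = 94,719,181 B/s, or about 90.33 MiB/s -- an order of magnitude fewer IOs for throughput in the same ballpark.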
00:06:05.716  
00:06:05.716  real	0m8.555s
00:06:05.716  user	0m16.178s
00:06:05.716  sys	0m0.231s
00:06:05.716   16:55:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:05.716   16:55:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:05.716  ************************************
00:06:05.716  END TEST bdev_verify_big_io
00:06:05.716  ************************************
00:06:05.716   16:55:28 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:05.716   16:55:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:05.716   16:55:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:05.716   16:55:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:05.716  ************************************
00:06:05.716  START TEST bdev_write_zeroes
00:06:05.716  ************************************
00:06:05.716   16:55:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:05.716  [2024-12-09 16:55:28.360657] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:05.716  [2024-12-09 16:55:28.360768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61850 ]
00:06:05.716  [2024-12-09 16:55:28.519510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.716  [2024-12-09 16:55:28.624387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.289  Running I/O for 1 seconds...
00:06:07.228      30145.00 IOPS,   117.75 MiB/s
00:06:07.228                                                                                                  Latency(us)
00:06:07.228  
[2024-12-09T16:55:30.269Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:07.228  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:07.228  	 Nvme0n1             :       1.02    4453.49      17.40       0.00     0.00   28688.76    5747.00  178257.92
00:06:07.228  Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:07.228  	 Nvme1n1             :       1.02    5261.71      20.55       0.00     0.00   24246.67    8469.27  166965.56
00:06:07.228  Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:07.228  	 Nvme2n1             :       1.02    5193.11      20.29       0.00     0.00   24490.33    8670.92  166158.97
00:06:07.228  Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:07.228  	 Nvme2n2             :       1.02    5124.71      20.02       0.00     0.00   24782.31    8570.09  166158.97
00:06:07.228  Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:07.228  	 Nvme2n3             :       1.03    5243.48      20.48       0.00     0.00   24187.57    8519.68  166158.97
00:06:07.228  Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:07.228  	 Nvme3n1             :       1.03    5299.89      20.70       0.00     0.00   23897.29    8418.86  166158.97
00:06:07.228  
[2024-12-09T16:55:30.269Z]  ===================================================================================================================
00:06:07.228  
[2024-12-09T16:55:30.269Z]  Total                       :              30576.39     119.44       0.00     0.00   24952.05    5747.00  178257.92
00:06:08.166  ************************************
00:06:08.166  END TEST bdev_write_zeroes
00:06:08.166  ************************************
00:06:08.166  
00:06:08.166  real	0m2.707s
00:06:08.166  user	0m2.387s
00:06:08.166  sys	0m0.205s
00:06:08.166   16:55:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.166   16:55:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:08.166   16:55:31 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:08.166   16:55:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:08.166   16:55:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.166   16:55:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:08.166  ************************************
00:06:08.166  START TEST bdev_json_nonenclosed
00:06:08.166  ************************************
00:06:08.166   16:55:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:08.166  [2024-12-09 16:55:31.134735] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:08.166  [2024-12-09 16:55:31.134867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61903 ]
00:06:08.426  [2024-12-09 16:55:31.295890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.426  [2024-12-09 16:55:31.399269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.426  [2024-12-09 16:55:31.399353] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:08.426  [2024-12-09 16:55:31.399371] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:08.426  [2024-12-09 16:55:31.399380] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:08.685  
00:06:08.685  real	0m0.506s
00:06:08.685  user	0m0.311s
00:06:08.685  sys	0m0.090s
00:06:08.685   16:55:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.685  ************************************
00:06:08.685  END TEST bdev_json_nonenclosed
00:06:08.685  ************************************
00:06:08.685   16:55:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
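This is a negative test: nonenclosed.json deliberately violates the rule that a bdevperf --json config must be a single JSON object, so json_config_prepare_ctx rejects it and the app exits non-zero, which is the pass condition. A hypothetical reconstruction of the shape being exercised (the actual file lives in test/bdev/):

    # invalid: top-level content not enclosed in {}
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # a valid config is a single object: { "subsystems": [ ... ] }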
00:06:08.685   16:55:31 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:08.685   16:55:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:08.685   16:55:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.685   16:55:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:08.685  ************************************
00:06:08.685  START TEST bdev_json_nonarray
00:06:08.685  ************************************
00:06:08.685   16:55:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:08.685  [2024-12-09 16:55:31.695759] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:08.685  [2024-12-09 16:55:31.695890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ]
00:06:08.945  [2024-12-09 16:55:31.856401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.945  [2024-12-09 16:55:31.956810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.945  [2024-12-09 16:55:31.956909] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:06:08.945  [2024-12-09 16:55:31.956928] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:08.945  [2024-12-09 16:55:31.956938] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:09.206  
00:06:09.206  real	0m0.502s
00:06:09.206  user	0m0.303s
00:06:09.206  sys	0m0.095s
00:06:09.206   16:55:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.206  ************************************
00:06:09.206  END TEST bdev_json_nonarray
00:06:09.206   16:55:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:09.206  ************************************
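The companion negative test: here the top-level braces are present, but "subsystems" is not an array, tripping the other json_config_prepare_ctx check seen in the error trace. Hypothetically:

    # invalid: 'subsystems' should be an array, not an object
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF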
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]]
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]]
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]]
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:06:09.206   16:55:32 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:06:09.206  
00:06:09.206  real	0m36.870s
00:06:09.206  user	0m57.131s
00:06:09.206  sys	0m5.239s
00:06:09.206   16:55:32 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.206   16:55:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:09.206  ************************************
00:06:09.206  END TEST blockdev_nvme
00:06:09.206  ************************************
00:06:09.206    16:55:32  -- spdk/autotest.sh@209 -- # uname -s
00:06:09.206   16:55:32  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:06:09.206   16:55:32  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:09.206   16:55:32  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:09.206   16:55:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.206   16:55:32  -- common/autotest_common.sh@10 -- # set +x
00:06:09.496  ************************************
00:06:09.496  START TEST blockdev_nvme_gpt
00:06:09.496  ************************************
00:06:09.496   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:09.496  * Looking for test storage...
00:06:09.496  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:09.496     16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version
00:06:09.496     16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:09.496     16:55:32 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:09.496    16:55:32 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
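
The trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: each version string is split on '.', '-', and ':' and the fields are compared numerically, left to right. A minimal standalone sketch of the same technique (the name version_lt is hypothetical and fields are assumed to be plain integers; the real cmp_versions/decimal helpers also normalize non-decimal fields):

    version_lt() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # missing fields compare as 0, so 1.15 is treated like 1.15.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"
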
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:09.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.496  		--rc genhtml_branch_coverage=1
00:06:09.496  		--rc genhtml_function_coverage=1
00:06:09.496  		--rc genhtml_legend=1
00:06:09.496  		--rc geninfo_all_blocks=1
00:06:09.496  		--rc geninfo_unexecuted_blocks=1
00:06:09.496  		
00:06:09.496  		'
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:09.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.496  		--rc genhtml_branch_coverage=1
00:06:09.496  		--rc genhtml_function_coverage=1
00:06:09.496  		--rc genhtml_legend=1
00:06:09.496  		--rc geninfo_all_blocks=1
00:06:09.496  		--rc geninfo_unexecuted_blocks=1
00:06:09.496  		
00:06:09.496  		'
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:09.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.496  		--rc genhtml_branch_coverage=1
00:06:09.496  		--rc genhtml_function_coverage=1
00:06:09.496  		--rc genhtml_legend=1
00:06:09.496  		--rc geninfo_all_blocks=1
00:06:09.496  		--rc geninfo_unexecuted_blocks=1
00:06:09.496  		
00:06:09.496  		'
00:06:09.496    16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:09.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.496  		--rc genhtml_branch_coverage=1
00:06:09.496  		--rc genhtml_function_coverage=1
00:06:09.496  		--rc genhtml_legend=1
00:06:09.496  		--rc geninfo_all_blocks=1
00:06:09.496  		--rc geninfo_unexecuted_blocks=1
00:06:09.497  		
00:06:09.497  		'
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:09.497    16:55:32 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:06:09.497    16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device=
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek=
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx=
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]]
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]]
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62007
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62007
00:06:09.497   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62007 ']'
00:06:09.497   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.497   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.497  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.497   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.497   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.497   16:55:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:09.497   16:55:32 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:06:09.497  [2024-12-09 16:55:32.492979] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:09.497  [2024-12-09 16:55:32.493108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62007 ]
00:06:09.804  [2024-12-09 16:55:32.650828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.804  [2024-12-09 16:55:32.754385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.376   16:55:33 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:10.376   16:55:33 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
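
start_spdk_tgt above launches build/bin/spdk_tgt and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that polling loop, under the assumption that "listening" means any RPC round-trips (waitforsocket is a hypothetical name; rpc.py and rpc_get_methods are the stock SPDK tooling):

    waitforsocket() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            # consider the target up once one RPC call succeeds
            ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1   # target never came up
    }
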
00:06:10.376   16:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:06:10.376   16:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf
00:06:10.376   16:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:10.637  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:10.896  Waiting for block devices as requested
00:06:10.896  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:10.896  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:11.157  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:06:11.157  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:06:16.446  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
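
get_zoned_devs above walks every namespace of every controller and records it as zoned when /sys/block/<ns>/queue/zoned reports anything other than "none" (all six namespaces in this run report "none"). In isolation the check looks roughly like:

    # sketch of is_block_zoned applied to every NVMe namespace
    for ns in /sys/class/nvme/nvme*/nvme*n*; do
        dev=${ns##*/}
        if [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]]; then
            echo "$dev is zoned"
        fi
    done
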
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1')
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:06:16.446    16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:06:16.446  BYT;
00:06:16.446  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:06:16.446  BYT;
00:06:16.446  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
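
The loop above picks the first namespace whose parted machine-readable output reports an unrecognised disk label, i.e. a blank disk that is safe to relabel for the GPT test; /dev/nvme0n1 matches on the first try, hence the break. A standalone sketch of the probe:

    gpt_nvme=
    for sys_dev in /sys/block/nvme*n*; do
        dev=/dev/${sys_dev##*/}
        pt=$(parted "$dev" -ms print 2>&1) || true   # parted exits nonzero on a blank disk
        if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
            gpt_nvme=$dev
            break
        fi
    done
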
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:06:16.446    16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:06:16.446     16:55:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:06:16.446   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:06:16.446    16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:16.446    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:06:16.447    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:06:16.447     16:55:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:06:16.447    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:06:16.447    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:06:16.447    16:55:39 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:06:16.447   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
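
get_spdk_gpt_old and get_spdk_gpt above scrape SPDK's two GPT partition type GUIDs out of module/bdev/gpt/gpt.h: grep finds the macro line, IFS='()' splits out the text between the parentheses (a dash-separated GUID with 0x prefixes, as traced), and the 0x prefixes are stripped. Condensed into one hypothetical helper:

    get_header_guid() {   # $1 = macro name, $2 = path to gpt.h
        local spdk_guid
        IFS='()' read -r _ spdk_guid _ < <(grep -w "$1" "$2")
        # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b -> 6527994e-2c5a-4eec-9613-8f5944074e8b
        echo "${spdk_guid//0x/}"
    }
    get_header_guid SPDK_GPT_PART_TYPE_GUID module/bdev/gpt/gpt.h
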
00:06:16.447   16:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:06:17.387  The operation has completed successfully.
00:06:17.388   16:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:06:18.790  The operation has completed successfully.
00:06:18.790   16:55:41 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:19.050  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:19.616  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:19.616  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:19.616  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:06:19.616  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:06:19.616   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:06:19.616   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.616   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:19.616  []
00:06:19.616   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.616   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:06:19.616   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:06:19.616   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:06:19.616    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:19.616   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:06:19.616   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.616   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:19.874   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
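
setup_nvme_conf feeds gen_nvme.sh's generated JSON to load_subsystem_config, attaching all four PCIe controllers in a single RPC. For one controller, the equivalent hand-typed call against the default socket would be (assuming a running target and the stock scripts/rpc.py):

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
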
00:06:19.874   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:06:19.874   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.874   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:19.874   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.874   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat
00:06:19.874    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.874    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.874    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:19.874   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:06:19.874    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:06:19.874    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:19.874    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:20.132    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:20.132   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:06:20.132    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name
00:06:20.133    16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "54714b1b-e60d-4254-8420-7b817f5b3483"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "54714b1b-e60d-4254-8420-7b817f5b3483",' '  "numa_id": -1,' '  "md_size": 64,' '  "md_interleave": false,' '  "dif_type": 0,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": true,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme1n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}' '{' '  "name": "Nvme2n1",' '  "aliases": [' '    "21c943b5-fc90-46a4-9014-4922afcfc422"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "21c943b5-fc90-46a4-9014-4922afcfc422",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n2",' '  "aliases": [' '    "f856d5be-16a9-4952-a9ee-598828f15064"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "f856d5be-16a9-4952-a9ee-598828f15064",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 2,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n3",' '  "aliases": [' '    "f355bffa-8224-4cb4-aaee-efc5001e638e"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "f355bffa-8224-4cb4-aaee-efc5001e638e",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 3,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme3n1",' '  "aliases": [' '    "1be9a469-e1e4-4f26-a2fd-c0696fb7d2fb"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "1be9a469-e1e4-4f26-a2fd-c0696fb7d2fb",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:13.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:13.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12343",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": true,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": true' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:06:20.133   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:06:20.133   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1
00:06:20.133   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:06:20.133   16:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62007
00:06:20.133   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62007 ']'
00:06:20.133   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62007
00:06:20.133    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:06:20.133   16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:20.133    16:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62007
00:06:20.133   16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:20.133   16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:20.133  killing process with pid 62007
00:06:20.133   16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62007'
00:06:20.133   16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62007
00:06:20.133   16:55:43 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62007
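
killprocess above tears the target down: kill -0 confirms pid 62007 is still alive, ps guards against killing a sudo process, then a SIGTERM plus wait reaps it. A hypothetical standalone variant of the same pattern:

    killproc() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0   # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill sudo
        kill "$pid"
        wait "$pid" 2> /dev/null || true   # wait only reaps children of this shell
    }
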
00:06:21.506   16:55:44 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:21.506   16:55:44 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:06:21.506   16:55:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:06:21.506   16:55:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.506   16:55:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:21.506  ************************************
00:06:21.506  START TEST bdev_hello_world
00:06:21.506  ************************************
00:06:21.506   16:55:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:06:21.506  [2024-12-09 16:55:44.262062] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:21.506  [2024-12-09 16:55:44.262182] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62635 ]
00:06:21.506  [2024-12-09 16:55:44.420010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.506  [2024-12-09 16:55:44.503046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.081  [2024-12-09 16:55:45.001675] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:06:22.081  [2024-12-09 16:55:45.001728] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:06:22.081  [2024-12-09 16:55:45.001753] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:06:22.081  [2024-12-09 16:55:45.004214] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:06:22.081  [2024-12-09 16:55:45.004972] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:06:22.081  [2024-12-09 16:55:45.005014] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:06:22.081  [2024-12-09 16:55:45.005483] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:06:22.081  
00:06:22.081  [2024-12-09 16:55:45.005517] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:06:23.025  
00:06:23.025  real	0m1.537s
00:06:23.025  user	0m1.270s
00:06:23.025  sys	0m0.161s
00:06:23.025   16:55:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:06:23.026  ************************************
00:06:23.026  END TEST bdev_hello_world
00:06:23.026  ************************************
00:06:23.026   16:55:45 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:06:23.026   16:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:23.026   16:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.026   16:55:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:23.026  ************************************
00:06:23.026  START TEST bdev_bounds
00:06:23.026  ************************************
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62666
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62666'
00:06:23.026  Process bdevio pid: 62666
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62666
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62666 ']'
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:23.026  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:23.026   16:55:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:23.026  [2024-12-09 16:55:45.852856] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:23.026  [2024-12-09 16:55:45.852979] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62666 ]
00:06:23.026  [2024-12-09 16:55:46.012365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:23.286  [2024-12-09 16:55:46.117822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.286  [2024-12-09 16:55:46.118050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:23.286  [2024-12-09 16:55:46.118124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.858   16:55:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.858   16:55:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:06:23.858   16:55:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:23.858  I/O targets:
00:06:23.858    Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:06:23.858    Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:23.858    Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:23.858    Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:23.858    Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:23.858    Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:23.858    Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:06:23.858  
00:06:23.858  
00:06:23.858       CUnit - A unit testing framework for C - Version 2.1-3
00:06:23.858       http://cunit.sourceforge.net/
00:06:23.858  
00:06:23.858  
00:06:23.858  Suite: bdevio tests on: Nvme3n1
00:06:23.858    Test: blockdev write read block ...passed
00:06:23.858    Test: blockdev write zeroes read block ...passed
00:06:23.858    Test: blockdev write zeroes read no split ...passed
00:06:23.858    Test: blockdev write zeroes read split ...passed
00:06:23.858    Test: blockdev write zeroes read split partial ...passed
00:06:23.858    Test: blockdev reset ...[2024-12-09 16:55:46.825955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:06:23.858  [2024-12-09 16:55:46.829189] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:23.858  passed
00:06:23.858    Test: blockdev write read 8 blocks ...passed
00:06:23.858    Test: blockdev write read size > 128k ...passed
00:06:23.858    Test: blockdev write read invalid size ...passed
00:06:23.858    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:23.858    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:23.858    Test: blockdev write read max offset ...passed
00:06:23.858    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:23.858    Test: blockdev writev readv 8 blocks ...passed
00:06:23.858    Test: blockdev writev readv 30 x 1block ...passed
00:06:23.858    Test: blockdev writev readv block ...passed
00:06:23.858    Test: blockdev writev readv size > 128k ...passed
00:06:23.858    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:23.858    Test: blockdev comparev and writev ...[2024-12-09 16:55:46.840272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bec04000 len:0x1000
00:06:23.858  [2024-12-09 16:55:46.840334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:23.858  passed
00:06:23.858    Test: blockdev nvme passthru rw ...passed
00:06:23.858    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:46.841956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:23.858  [2024-12-09 16:55:46.841991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:23.858  passed
00:06:23.858    Test: blockdev nvme admin passthru ...passed
00:06:23.858    Test: blockdev copy ...passed
00:06:23.858  Suite: bdevio tests on: Nvme2n3
00:06:23.858    Test: blockdev write read block ...passed
00:06:23.858    Test: blockdev write zeroes read block ...passed
00:06:23.858    Test: blockdev write zeroes read no split ...passed
00:06:23.858    Test: blockdev write zeroes read split ...passed
00:06:24.167    Test: blockdev write zeroes read split partial ...passed
00:06:24.167    Test: blockdev reset ...[2024-12-09 16:55:46.897671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:06:24.167  [2024-12-09 16:55:46.901528] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:24.167  passed
00:06:24.167    Test: blockdev write read 8 blocks ...passed
00:06:24.167    Test: blockdev write read size > 128k ...passed
00:06:24.167    Test: blockdev write read invalid size ...passed
00:06:24.167    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:24.167    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:24.167    Test: blockdev write read max offset ...passed
00:06:24.167    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:24.167    Test: blockdev writev readv 8 blocks ...passed
00:06:24.167    Test: blockdev writev readv 30 x 1block ...passed
00:06:24.167    Test: blockdev writev readv block ...passed
00:06:24.167    Test: blockdev writev readv size > 128k ...passed
00:06:24.167    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:24.167    Test: blockdev comparev and writev ...[2024-12-09 16:55:46.918302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bec02000 len:0x1000
00:06:24.167  [2024-12-09 16:55:46.918346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme passthru rw ...passed
00:06:24.167    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:46.919955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:24.167  [2024-12-09 16:55:46.920061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme admin passthru ...passed
00:06:24.167    Test: blockdev copy ...passed
00:06:24.167  Suite: bdevio tests on: Nvme2n2
00:06:24.167    Test: blockdev write read block ...passed
00:06:24.167    Test: blockdev write zeroes read block ...passed
00:06:24.167    Test: blockdev write zeroes read no split ...passed
00:06:24.167    Test: blockdev write zeroes read split ...passed
00:06:24.167    Test: blockdev write zeroes read split partial ...passed
00:06:24.167    Test: blockdev reset ...[2024-12-09 16:55:46.977610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:06:24.167  [2024-12-09 16:55:46.980867] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:24.167  passed
00:06:24.167    Test: blockdev write read 8 blocks ...passed
00:06:24.167    Test: blockdev write read size > 128k ...passed
00:06:24.167    Test: blockdev write read invalid size ...passed
00:06:24.167    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:24.167    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:24.167    Test: blockdev write read max offset ...passed
00:06:24.167    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:24.167    Test: blockdev writev readv 8 blocks ...passed
00:06:24.167    Test: blockdev writev readv 30 x 1block ...passed
00:06:24.167    Test: blockdev writev readv block ...passed
00:06:24.167    Test: blockdev writev readv size > 128k ...passed
00:06:24.167    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:24.167    Test: blockdev comparev and writev ...[2024-12-09 16:55:46.998716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5838000 len:0x1000
00:06:24.167  [2024-12-09 16:55:46.998764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme passthru rw ...passed
00:06:24.167    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:47.000985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:24.167  [2024-12-09 16:55:47.001028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme admin passthru ...passed
00:06:24.167    Test: blockdev copy ...passed
00:06:24.167  Suite: bdevio tests on: Nvme2n1
00:06:24.167    Test: blockdev write read block ...passed
00:06:24.167    Test: blockdev write zeroes read block ...passed
00:06:24.167    Test: blockdev write zeroes read no split ...passed
00:06:24.167    Test: blockdev write zeroes read split ...passed
00:06:24.167    Test: blockdev write zeroes read split partial ...passed
00:06:24.167    Test: blockdev reset ...[2024-12-09 16:55:47.061857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:06:24.167  [2024-12-09 16:55:47.065139] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:24.167  passed
00:06:24.167    Test: blockdev write read 8 blocks ...passed
00:06:24.167    Test: blockdev write read size > 128k ...passed
00:06:24.167    Test: blockdev write read invalid size ...passed
00:06:24.167    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:24.167    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:24.167    Test: blockdev write read max offset ...passed
00:06:24.167    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:24.167    Test: blockdev writev readv 8 blocks ...passed
00:06:24.167    Test: blockdev writev readv 30 x 1block ...passed
00:06:24.167    Test: blockdev writev readv block ...passed
00:06:24.167    Test: blockdev writev readv size > 128k ...passed
00:06:24.167    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:24.167    Test: blockdev comparev and writev ...[2024-12-09 16:55:47.077900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5834000 len:0x1000
00:06:24.167  [2024-12-09 16:55:47.077956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme passthru rw ...passed
00:06:24.167    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:47.079311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:06:24.167  [2024-12-09 16:55:47.079340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme admin passthru ...passed
00:06:24.167    Test: blockdev copy ...passed
00:06:24.167  Suite: bdevio tests on: Nvme1n1p2
00:06:24.167    Test: blockdev write read block ...passed
00:06:24.167    Test: blockdev write zeroes read block ...passed
00:06:24.167    Test: blockdev write zeroes read no split ...passed
00:06:24.167    Test: blockdev write zeroes read split ...passed
00:06:24.167    Test: blockdev write zeroes read split partial ...passed
00:06:24.167    Test: blockdev reset ...[2024-12-09 16:55:47.132683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:06:24.167  [2024-12-09 16:55:47.136064] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:24.167  passed
00:06:24.167    Test: blockdev write read 8 blocks ...passed
00:06:24.167    Test: blockdev write read size > 128k ...passed
00:06:24.167    Test: blockdev write read invalid size ...passed
00:06:24.167    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:24.167    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:24.167    Test: blockdev write read max offset ...passed
00:06:24.167    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:24.167    Test: blockdev writev readv 8 blocks ...passed
00:06:24.167    Test: blockdev writev readv 30 x 1block ...passed
00:06:24.167    Test: blockdev writev readv block ...passed
00:06:24.167    Test: blockdev writev readv size > 128k ...passed
00:06:24.167    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:24.167    Test: blockdev comparev and writev ...[2024-12-09 16:55:47.145295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d5830000 len:0x1000
00:06:24.167  [2024-12-09 16:55:47.145342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:24.167  passed
00:06:24.167    Test: blockdev nvme passthru rw ...passed
00:06:24.167    Test: blockdev nvme passthru vendor specific ...passed
00:06:24.167    Test: blockdev nvme admin passthru ...passed
00:06:24.167    Test: blockdev copy ...passed
00:06:24.167  Suite: bdevio tests on: Nvme1n1p1
00:06:24.167    Test: blockdev write read block ...passed
00:06:24.167    Test: blockdev write zeroes read block ...passed
00:06:24.167    Test: blockdev write zeroes read no split ...passed
00:06:24.167    Test: blockdev write zeroes read split ...passed
00:06:24.443    Test: blockdev write zeroes read split partial ...passed
00:06:24.443    Test: blockdev reset ...[2024-12-09 16:55:47.199376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:06:24.443  [2024-12-09 16:55:47.203583] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:24.443  passed
00:06:24.443    Test: blockdev write read 8 blocks ...passed
00:06:24.443    Test: blockdev write read size > 128k ...passed
00:06:24.443    Test: blockdev write read invalid size ...passed
00:06:24.443    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:24.443    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:24.443    Test: blockdev write read max offset ...passed
00:06:24.443    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:24.443    Test: blockdev writev readv 8 blocks ...passed
00:06:24.443    Test: blockdev writev readv 30 x 1block ...passed
00:06:24.443    Test: blockdev writev readv block ...passed
00:06:24.443    Test: blockdev writev readv size > 128k ...passed
00:06:24.443    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:24.443    Test: blockdev comparev and writev ...[2024-12-09 16:55:47.215297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b540e000 len:0x1000
00:06:24.443  [2024-12-09 16:55:47.215337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:06:24.443  passed
00:06:24.443    Test: blockdev nvme passthru rw ...passed
00:06:24.443    Test: blockdev nvme passthru vendor specific ...passed
00:06:24.443    Test: blockdev nvme admin passthru ...passed
00:06:24.443    Test: blockdev copy ...passed
00:06:24.443  Suite: bdevio tests on: Nvme0n1
00:06:24.443    Test: blockdev write read block ...passed
00:06:24.443    Test: blockdev write zeroes read block ...passed
00:06:24.443    Test: blockdev write zeroes read no split ...passed
00:06:24.443    Test: blockdev write zeroes read split ...passed
00:06:24.443    Test: blockdev write zeroes read split partial ...passed
00:06:24.443    Test: blockdev reset ...[2024-12-09 16:55:47.264535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:06:24.443  [2024-12-09 16:55:47.268501] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:24.443  passed
00:06:24.443    Test: blockdev write read 8 blocks ...passed
00:06:24.443    Test: blockdev write read size > 128k ...passed
00:06:24.443    Test: blockdev write read invalid size ...passed
00:06:24.443    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:06:24.443    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:06:24.443    Test: blockdev write read max offset ...passed
00:06:24.443    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:06:24.443    Test: blockdev writev readv 8 blocks ...passed
00:06:24.443    Test: blockdev writev readv 30 x 1block ...passed
00:06:24.443    Test: blockdev writev readv block ...passed
00:06:24.443    Test: blockdev writev readv size > 128k ...passed
00:06:24.443    Test: blockdev writev readv size > 128k in two iovs ...passed
00:06:24.443    Test: blockdev comparev and writev ...passed
00:06:24.443    Test: blockdev nvme passthru rw ...[2024-12-09 16:55:47.281672] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:06:24.443  separate metadata which is not supported yet.
00:06:24.443  passed
00:06:24.443    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:55:47.283103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:06:24.443  [2024-12-09 16:55:47.283216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:06:24.443  passed
00:06:24.443    Test: blockdev nvme admin passthru ...passed
00:06:24.443    Test: blockdev copy ...passed
00:06:24.443  
00:06:24.443  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:24.443                suites      7      7    n/a      0        0
00:06:24.443                 tests    161    161    161      0        0
00:06:24.443               asserts   1025   1025   1025      0      n/a
00:06:24.443  
00:06:24.443  Elapsed time =    1.288 seconds
00:06:24.443  0
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62666
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62666 ']'
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62666
00:06:24.443    16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:24.443    16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62666
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62666'
00:06:24.443  killing process with pid 62666
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62666
00:06:24.443   16:55:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62666
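The dozen trace lines above are the process-teardown helper from autotest_common.sh tearing down pid 62666. Condensed as a standalone bash sketch for readability; the body is inferred from the visible commands, so the sudo guard and the exact return codes are assumptions, not the authoritative source:

  killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1              # refuse an empty pid argument
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    if [[ "$(uname)" == Linux ]]; then
      # resolve the command name; a sudo wrapper must not be killed directly
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ "$process_name" == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap it so the exit status is collected
  }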
00:06:25.012   16:55:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:06:25.012  
00:06:25.012  real	0m2.213s
00:06:25.012  user	0m5.524s
00:06:25.012  sys	0m0.324s
00:06:25.012   16:55:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.012   16:55:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:25.012  ************************************
00:06:25.012  END TEST bdev_bounds
00:06:25.012  ************************************
00:06:25.012   16:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:06:25.012   16:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:25.012   16:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.013   16:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:25.013  ************************************
00:06:25.013  START TEST bdev_nbd
00:06:25.013  ************************************
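The START/END banners in this log are printed by the run_test wrapper traced above (run_test bdev_nbd nbd_function_test ...). A minimal sketch of that wrapper, assuming it simply brackets the test function between banners and propagates its exit code; the real helper also manages xtrace state, which is omitted here:

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"                                     # run the test function with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
  }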
00:06:25.013   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:06:25.013    16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62726
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62726 /var/tmp/spdk-nbd.sock
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62726 ']'
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.271  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:25.271   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:25.271  [2024-12-09 16:55:48.118257] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:25.271  [2024-12-09 16:55:48.118371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:25.271  [2024-12-09 16:55:48.280321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.529  [2024-12-09 16:55:48.378459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
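Above, nbd_function_test launches the bdev_svc helper app against bdev.json and waits for it to answer on /var/tmp/spdk-nbd.sock before proceeding. A minimal sketch of that start-and-wait step; only max_retries=100 is taken from the trace, while the spdk_get_version probe and the 0.1 s poll interval are assumptions:

  rpc_addr=/var/tmp/spdk-nbd.sock
  conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  ./test/app/bdev_svc/bdev_svc -r "$rpc_addr" -i 0 --json "$conf" &
  nbd_pid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 1; i <= 100; i++)); do
    # poll the RPC socket until the app responds
    ./scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && break
    sleep 0.1
  done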
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:06:26.099   16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:26.099    16:55:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:06:26.361    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:26.361  1+0 records in
00:06:26.361  1+0 records out
00:06:26.361  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728781 s, 5.6 MB/s
00:06:26.361    16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:26.361   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
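Each nbd_start_disk above is followed by the same readiness check, waitfornbd: poll /proc/partitions until the kernel registers the device, then prove it is readable with one 4 KiB O_DIRECT read. Condensed as a sketch; the scratch-file path is illustrative and the sleep is an assumption, while the i <= 20 bounds mirror the trace:

  waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
      # a direct read exercises the block device and bypasses the page cache
      if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ "$size" != 0 ]] && return 0
      fi
      sleep 0.1
    done
    return 1
  }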
00:06:26.361    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:06:26.622    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:26.622  1+0 records in
00:06:26.622  1+0 records out
00:06:26.622  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418164 s, 9.8 MB/s
00:06:26.622    16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:26.622   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:26.622    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:06:26.880    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:26.880  1+0 records in
00:06:26.880  1+0 records out
00:06:26.880  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729128 s, 5.6 MB/s
00:06:26.880    16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:26.880   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:26.880    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:06:27.137    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:27.137   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:27.137  1+0 records in
00:06:27.137  1+0 records out
00:06:27.138  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305499 s, 13.4 MB/s
00:06:27.138    16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.138   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:27.138   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.138   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:27.138   16:55:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:27.138   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:27.138   16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:27.138    16:55:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:06:27.395    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:27.395  1+0 records in
00:06:27.395  1+0 records out
00:06:27.395  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564305 s, 7.3 MB/s
00:06:27.395    16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:27.395   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:27.395    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:06:27.653    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:27.653  1+0 records in
00:06:27.653  1+0 records out
00:06:27.653  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352498 s, 11.6 MB/s
00:06:27.653    16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:27.653    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:06:27.653    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:27.653   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:27.912  1+0 records in
00:06:27.912  1+0 records out
00:06:27.912  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737432 s, 5.6 MB/s
00:06:27.912    16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:06:27.912    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd0",
00:06:27.912      "bdev_name": "Nvme0n1"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd1",
00:06:27.912      "bdev_name": "Nvme1n1p1"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd2",
00:06:27.912      "bdev_name": "Nvme1n1p2"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd3",
00:06:27.912      "bdev_name": "Nvme2n1"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd4",
00:06:27.912      "bdev_name": "Nvme2n2"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd5",
00:06:27.912      "bdev_name": "Nvme2n3"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd6",
00:06:27.912      "bdev_name": "Nvme3n1"
00:06:27.912    }
00:06:27.912  ]'
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:06:27.912    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd0",
00:06:27.912      "bdev_name": "Nvme0n1"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd1",
00:06:27.912      "bdev_name": "Nvme1n1p1"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd2",
00:06:27.912      "bdev_name": "Nvme1n1p2"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd3",
00:06:27.912      "bdev_name": "Nvme2n1"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd4",
00:06:27.912      "bdev_name": "Nvme2n2"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd5",
00:06:27.912      "bdev_name": "Nvme2n3"
00:06:27.912    },
00:06:27.912    {
00:06:27.912      "nbd_device": "/dev/nbd6",
00:06:27.912      "bdev_name": "Nvme3n1"
00:06:27.912    }
00:06:27.912  ]'
00:06:27.912    16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
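With all seven devices exported, the trace queries nbd_get_disks, which returns a JSON array of {nbd_device, bdev_name} pairs, and extracts the device paths into a bash array with jq. The same step as a standalone sketch (mapfile replaces the trace's word-splitting assignment):

  nbd_disks_json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # keep only the /dev/nbdN paths, one array element per exported device
  mapfile -t nbd_disks_name < <(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  printf '%s\n' "${nbd_disks_name[@]}"       # /dev/nbd0 ... /dev/nbd6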
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:27.912   16:55:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:28.170    16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
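Each nbd_stop_disk above is paired with the inverse check, waitfornbd_exit, which polls until the name disappears from /proc/partitions. A condensed sketch; the poll interval and the error message are assumptions, the i <= 20 bound comes from the trace:

  waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions || return 0
      sleep 0.1
    done
    echo "$nbd_name is still attached after 20 polls" >&2
    return 1
  }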
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:28.170   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:28.427    16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:28.427   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:06:28.685    16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:28.685   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:06:28.943    16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:28.943   16:55:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:06:29.278    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:06:29.278    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:29.278   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:06:29.536    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:29.536   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:29.536    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:29.536    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.536     16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:29.794    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:29.794     16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:29.794     16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:29.794    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:29.794     16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:29.794     16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:29.794     16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:29.794    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:29.794    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
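After the last device detaches, nbd_get_count re-queries the RPC server and asserts that zero /dev/nbd paths remain; grep -c exits non-zero on an empty match, which is why the trace ends the pipeline with true. As a sketch:

  nbd_get_count() {
    local rpc_server=$1 count
    count=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks |
      jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "$count"
  }
  [[ "$(nbd_get_count /var/tmp/spdk-nbd.sock)" -eq 0 ]]   # expect everything stopped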
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:29.794   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:06:30.053  /dev/nbd0
00:06:30.053    16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.053  1+0 records in
00:06:30.053  1+0 records out
00:06:30.053  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424811 s, 9.6 MB/s
00:06:30.053    16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:30.053   16:55:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:06:30.311  /dev/nbd1
00:06:30.311    16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.311  1+0 records in
00:06:30.311  1+0 records out
00:06:30.311  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370967 s, 11.0 MB/s
00:06:30.311    16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:30.311   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:06:30.311  /dev/nbd10
00:06:30.569    16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.569  1+0 records in
00:06:30.569  1+0 records out
00:06:30.569  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034724 s, 11.8 MB/s
00:06:30.569    16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:06:30.569  /dev/nbd11
00:06:30.569    16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.569   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:06:30.570   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.570   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.570   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.570   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.570  1+0 records in
00:06:30.570  1+0 records out
00:06:30.570  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452574 s, 9.1 MB/s
00:06:30.570    16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:06:30.828  /dev/nbd12
00:06:30.828    16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:30.828  1+0 records in
00:06:30.828  1+0 records out
00:06:30.828  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405751 s, 10.1 MB/s
00:06:30.828    16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:30.828   16:55:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
00:06:31.085  /dev/nbd13
00:06:31.085    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:06:31.085   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:06:31.085   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:06:31.085   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:31.085   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:31.085   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:31.086  1+0 records in
00:06:31.086  1+0 records out
00:06:31.086  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375267 s, 10.9 MB/s
00:06:31.086    16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:06:31.086   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14
00:06:31.343  /dev/nbd14
00:06:31.343    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:31.343  1+0 records in
00:06:31.343  1+0 records out
00:06:31.343  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414152 s, 9.9 MB/s
00:06:31.343    16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:31.343   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
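The seven attach cycles above all follow the same pattern: nbd_start_disk exports one bdev over a /dev/nbdX node, a waitfornbd-style loop polls /proc/partitions until the kernel lists the device, and a single 4 KiB O_DIRECT read through dd proves the connection actually serves I/O. A minimal standalone sketch of that pattern, using the RPC script and socket paths from this run (the helper body is paraphrased from the trace, not copied from nbd_common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device shows up in /proc/partitions once the NBD connection is live
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4 KiB block with O_DIRECT to prove the device answers I/O
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    "$rpc" -s "$sock" nbd_start_disk Nvme3n1 /dev/nbd14
    waitfornbd nbd14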
00:06:31.343    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:31.344    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.344     16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:31.602    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd0",
00:06:31.602      "bdev_name": "Nvme0n1"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd1",
00:06:31.602      "bdev_name": "Nvme1n1p1"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd10",
00:06:31.602      "bdev_name": "Nvme1n1p2"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd11",
00:06:31.602      "bdev_name": "Nvme2n1"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd12",
00:06:31.602      "bdev_name": "Nvme2n2"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd13",
00:06:31.602      "bdev_name": "Nvme2n3"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd14",
00:06:31.602      "bdev_name": "Nvme3n1"
00:06:31.602    }
00:06:31.602  ]'
00:06:31.602     16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd0",
00:06:31.602      "bdev_name": "Nvme0n1"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd1",
00:06:31.602      "bdev_name": "Nvme1n1p1"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd10",
00:06:31.602      "bdev_name": "Nvme1n1p2"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd11",
00:06:31.602      "bdev_name": "Nvme2n1"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd12",
00:06:31.602      "bdev_name": "Nvme2n2"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd13",
00:06:31.602      "bdev_name": "Nvme2n3"
00:06:31.602    },
00:06:31.602    {
00:06:31.602      "nbd_device": "/dev/nbd14",
00:06:31.602      "bdev_name": "Nvme3n1"
00:06:31.602    }
00:06:31.602  ]'
00:06:31.602     16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:31.602    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:31.602  /dev/nbd1
00:06:31.602  /dev/nbd10
00:06:31.602  /dev/nbd11
00:06:31.602  /dev/nbd12
00:06:31.602  /dev/nbd13
00:06:31.602  /dev/nbd14'
00:06:31.602     16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:31.602  /dev/nbd1
00:06:31.602  /dev/nbd10
00:06:31.602  /dev/nbd11
00:06:31.602  /dev/nbd12
00:06:31.602  /dev/nbd13
00:06:31.602  /dev/nbd14'
00:06:31.602     16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:31.602    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7
00:06:31.602    16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']'
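nbd_get_count above reduces to one RPC and two text filters: nbd_get_disks dumps the attached devices as JSON, jq extracts the device paths, and grep -c counts them, with the result checked against the expected seven. Condensed into one pipeline, reusing rpc and sock from the first sketch (the || true guards the zero-match case, mirroring the bare `true` visible later in the trace at nbd_common.sh@65 when the list is empty):

    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 7 ]; then
        echo "expected 7 NBD devices, found $count" >&2
        exit 1
    fi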
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:06:31.602  256+0 records in
00:06:31.602  256+0 records out
00:06:31.602  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752017 s, 139 MB/s
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:31.602  256+0 records in
00:06:31.602  256+0 records out
00:06:31.602  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0701116 s, 15.0 MB/s
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:31.602   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:31.860  256+0 records in
00:06:31.860  256+0 records out
00:06:31.860  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0763052 s, 13.7 MB/s
00:06:31.860   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:31.860   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:06:31.860  256+0 records in
00:06:31.860  256+0 records out
00:06:31.860  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0744178 s, 14.1 MB/s
00:06:31.860   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:31.860   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:06:31.860  256+0 records in
00:06:31.860  256+0 records out
00:06:31.860  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0746289 s, 14.1 MB/s
00:06:31.860   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:31.860   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:06:32.118  256+0 records in
00:06:32.118  256+0 records out
00:06:32.118  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0720353 s, 14.6 MB/s
00:06:32.118   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:32.118   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:06:32.118  256+0 records in
00:06:32.118  256+0 records out
00:06:32.118  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0721103 s, 14.5 MB/s
00:06:32.118   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:32.118   16:55:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:06:32.118  256+0 records in
00:06:32.118  256+0 records out
00:06:32.118  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0807768 s, 13.0 MB/s
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:06:32.118   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
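The write/verify pass above is symmetrical: seed a 1 MiB file from /dev/urandom, dd it onto each NBD device with O_DIRECT, then cmp the first 1 MiB of every device back against the seed file, and remove the file. The same flow, condensed, with device list and sizes taken from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct # write it to each device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                            # byte-compare it back
    done
    rm "$tmp"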
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.119   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:32.376    16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.376   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:32.633    16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.633   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:06:32.891    16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.891   16:55:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:06:33.307    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:06:33.307    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:33.307   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:06:33.568    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:33.568   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:06:33.829    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:33.829   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
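Detach mirrors attach: nbd_stop_disk tears the NBD connection down over the same RPC socket, and a waitfornbd_exit-style loop polls /proc/partitions until the entry disappears before moving on to the next device. A sketch under the same assumptions as the sketches above:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done as soon as the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd14
    waitfornbd_exit nbd14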
00:06:33.829    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:33.829    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:33.829     16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:34.088    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:34.088     16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:34.088     16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:34.088    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:34.088     16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:34.088     16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:06:34.088     16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:06:34.088    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:06:34.088    16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:06:34.088   16:55:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:06:34.345  malloc_lvol_verify
00:06:34.345   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:06:34.345  ac609e02-2412-443a-bdc1-7d406df89dbf
00:06:34.345   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:06:34.602  f949cec9-e7f3-400d-9443-d1372f2e7256
00:06:34.602   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:06:34.859  /dev/nbd0
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:06:34.859  mke2fs 1.47.0 (5-Feb-2023)
00:06:34.859  Discarding device blocks: done
00:06:34.859  Creating filesystem with 4096 1k blocks and 1024 inodes
00:06:34.859  
00:06:34.859  Allocating group tables: done
00:06:34.859  Writing inode tables: done
00:06:34.859  Creating journal (1024 blocks): done
00:06:34.859  Writing superblocks and filesystem accounting information: done
00:06:34.859  
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:34.859   16:55:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:35.118    16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
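nbd_with_lvol_verify above chains four RPCs and one mkfs to prove the whole stack end to end: create a 16 MiB malloc bdev with 512-byte blocks, build the lvstore "lvs" on it, carve out a 4 MiB lvol, export lvs/lvol over /dev/nbd0, format it with ext4, then detach. The sequence, with names and sizes exactly as in the trace (rpc and sock as in the first sketch):

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB backing bdev
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB logical volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0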
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62726
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62726 ']'
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62726
00:06:35.118    16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:35.118    16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62726
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:35.118  killing process with pid 62726
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62726'
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62726
00:06:35.118   16:55:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62726
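killprocess wraps the teardown of the NBD app (pid 62726 here): kill -0 checks the pid is still alive, the process name is inspected so a sudo wrapper would be handled differently, then the pid gets a SIGTERM and wait reaps it so a non-zero exit would fail the test. A simplified sketch of the non-sudo path seen in this trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # fail fast if the process is already gone
        echo "killing process with pid $pid"
        kill "$pid"                          # SIGTERM
        wait "$pid"                          # the app was started by this shell, so wait
    }                                        # reaps it and propagates its exit status

    killprocess 62726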
00:06:38.394   16:56:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:06:38.394  
00:06:38.394  real	0m13.093s
00:06:38.394  user	0m17.340s
00:06:38.394  sys	0m3.999s
00:06:38.394   16:56:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:38.394  ************************************
00:06:38.394  END TEST bdev_nbd
00:06:38.394  ************************************
00:06:38.394   16:56:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:06:38.394   16:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:06:38.394   16:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']'
00:06:38.394  skipping fio tests on NVMe due to multi-ns failures.
00:06:38.394   16:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']'
00:06:38.394   16:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:06:38.394   16:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:38.394   16:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:38.394   16:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:38.394   16:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.394   16:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:38.394  ************************************
00:06:38.394  START TEST bdev_verify
00:06:38.394  ************************************
00:06:38.394   16:56:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:38.394  [2024-12-09 16:56:01.245674] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:38.394  [2024-12-09 16:56:01.245789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63137 ]
00:06:38.394  [2024-12-09 16:56:01.403349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:38.651  [2024-12-09 16:56:01.506212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.651  [2024-12-09 16:56:01.506452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.239  Running I/O for 5 seconds...
00:06:41.556      20160.00 IOPS,    78.75 MiB/s
[2024-12-09T16:56:05.532Z]     20512.00 IOPS,    80.12 MiB/s
[2024-12-09T16:56:06.494Z]     21141.33 IOPS,    82.58 MiB/s
[2024-12-09T16:56:07.430Z]     21504.00 IOPS,    84.00 MiB/s
[2024-12-09T16:56:07.430Z]     22054.40 IOPS,    86.15 MiB/s
00:06:44.389                                                                                                  Latency(us)
00:06:44.389  
[2024-12-09T16:56:07.430Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:44.389  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0xbd0bd
00:06:44.389  	 Nvme0n1             :       5.04    1522.97       5.95       0.00     0.00   83662.60   15728.64   85902.57
00:06:44.389  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:44.389  	 Nvme0n1             :       5.07    1566.72       6.12       0.00     0.00   81469.64   13812.97   93968.54
00:06:44.389  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0x4ff80
00:06:44.389  	 Nvme1n1p1           :       5.06    1528.98       5.97       0.00     0.00   83254.86    6377.16   75820.11
00:06:44.389  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:44.389  	 Nvme1n1p1           :       5.07    1566.01       6.12       0.00     0.00   81208.92   13611.32   77836.60
00:06:44.389  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0x4ff7f
00:06:44.389  	 Nvme1n1p2           :       5.07    1528.21       5.97       0.00     0.00   83148.49    7511.43   72997.02
00:06:44.389  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:44.389  	 Nvme1n1p2           :       5.07    1565.05       6.11       0.00     0.00   81032.32   12502.25   69367.34
00:06:44.389  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0x80000
00:06:44.389  	 Nvme2n1             :       5.08    1536.70       6.00       0.00     0.00   82657.73    9326.28   68560.74
00:06:44.389  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x80000 length 0x80000
00:06:44.389  	 Nvme2n1             :       5.07    1564.68       6.11       0.00     0.00   80865.11   12250.19   66544.25
00:06:44.389  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0x80000
00:06:44.389  	 Nvme2n2             :       5.08    1535.61       6.00       0.00     0.00   82529.92   11796.48   72190.42
00:06:44.389  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x80000 length 0x80000
00:06:44.389  	 Nvme2n2             :       5.08    1574.43       6.15       0.00     0.00   80231.16    2571.03   68560.74
00:06:44.389  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0x80000
00:06:44.389  	 Nvme2n3             :       5.09    1535.19       6.00       0.00     0.00   82377.76   12149.37   75820.11
00:06:44.389  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x80000 length 0x80000
00:06:44.389  	 Nvme2n3             :       5.09    1583.92       6.19       0.00     0.00   79634.58    5797.42   69770.63
00:06:44.389  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x0 length 0x20000
00:06:44.389  	 Nvme3n1             :       5.09    1534.79       6.00       0.00     0.00   82216.15   12451.84   80659.69
00:06:44.389  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:44.389  	 Verification LBA range: start 0x20000 length 0x20000
00:06:44.390  	 Nvme3n1             :       5.09    1583.57       6.19       0.00     0.00   79569.31    6024.27   74206.92
00:06:44.390  
[2024-12-09T16:56:07.431Z]  ===================================================================================================================
00:06:44.390  
[2024-12-09T16:56:07.431Z]  Total                       :              21726.83      84.87       0.00     0.00   81685.24    2571.03   93968.54
00:06:45.324  
00:06:45.324  real	0m7.087s
00:06:45.324  user	0m13.249s
00:06:45.324  sys	0m0.224s
00:06:45.324  ************************************
00:06:45.324  END TEST bdev_verify
00:06:45.324  ************************************
00:06:45.324   16:56:08 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.324   16:56:08 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
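The bdev_verify table above comes from a single bdevperf run: queue depth 128, 4 KiB I/O, a verify workload (data is written and then read back and checked) for 5 seconds on cores 0 and 1. Judging by each bdev appearing once per core mask in the table, -C has every enabled core drive every bdev. The invocation, reduced to its essentials:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3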
00:06:45.324   16:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:45.324   16:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:45.324   16:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.324   16:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:45.324  ************************************
00:06:45.324  START TEST bdev_verify_big_io
00:06:45.324  ************************************
00:06:45.324   16:56:08 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:45.582  [2024-12-09 16:56:08.403816] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:45.582  [2024-12-09 16:56:08.403963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63235 ]
00:06:45.582  [2024-12-09 16:56:08.565380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:45.840  [2024-12-09 16:56:08.667089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:45.840  [2024-12-09 16:56:08.667220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.407  Running I/O for 5 seconds...
00:06:52.621       2376.00 IOPS,   148.50 MiB/s
[2024-12-09T16:56:15.662Z]      3262.50 IOPS,   203.91 MiB/s
00:06:52.621                                                                                                  Latency(us)
00:06:52.621  
[2024-12-09T16:56:15.662Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:52.621  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0xbd0b
00:06:52.621  	 Nvme0n1             :       5.88      89.73       5.61       0.00     0.00 1362525.52   10687.41 1948738.17
00:06:52.621  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:52.621  	 Nvme0n1             :       5.91     103.24       6.45       0.00     0.00 1166668.24   18350.08 1297007.85
00:06:52.621  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0x4ff8
00:06:52.621  	 Nvme1n1p1           :       6.11      92.52       5.78       0.00     0.00 1276609.52   34885.32 1974549.27
00:06:52.621  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:06:52.621  	 Nvme1n1p1           :       6.02      82.44       5.15       0.00     0.00 1432994.24   63721.16 2103604.78
00:06:52.621  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0x4ff7
00:06:52.621  	 Nvme1n1p2           :       6.11      92.50       5.78       0.00     0.00 1230228.05   57671.68 2026171.47
00:06:52.621  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:06:52.621  	 Nvme1n1p2           :       6.02      79.76       4.99       0.00     0.00 1432558.99  147607.24 2129415.88
00:06:52.621  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0x8000
00:06:52.621  	 Nvme2n1             :       6.14      96.61       6.04       0.00     0.00 1151581.35   81062.99 2064888.12
00:06:52.621  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x8000 length 0x8000
00:06:52.621  	 Nvme2n1             :       6.02     111.13       6.95       0.00     0.00 1010940.84  104857.60 1142141.24
00:06:52.621  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0x8000
00:06:52.621  	 Nvme2n2             :       6.15     101.55       6.35       0.00     0.00 1064928.78   28230.89 2090699.22
00:06:52.621  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x8000 length 0x8000
00:06:52.621  	 Nvme2n2             :       6.12     115.61       7.23       0.00     0.00  941109.79   66947.54 1096971.82
00:06:52.621  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0x8000
00:06:52.621  	 Nvme2n3             :       6.17     112.47       7.03       0.00     0.00  929079.00   14922.04 1497043.89
00:06:52.621  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x8000 length 0x8000
00:06:52.621  	 Nvme2n3             :       6.15     124.91       7.81       0.00     0.00  850783.97   30650.68 1135688.47
00:06:52.621  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x0 length 0x2000
00:06:52.621  	 Nvme3n1             :       6.24     146.87       9.18       0.00     0.00  691228.80     259.94 1548666.09
00:06:52.621  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:52.621  	 Verification LBA range: start 0x2000 length 0x2000
00:06:52.621  	 Nvme3n1             :       6.16     135.10       8.44       0.00     0.00  760587.33    4285.05 1167952.34
00:06:52.621  
[2024-12-09T16:56:15.662Z]  ===================================================================================================================
00:06:52.621  
[2024-12-09T16:56:15.662Z]  Total                       :               1484.45      92.78       0.00     0.00 1049972.06     259.94 2129415.88
00:06:54.046  
00:06:54.046  real	0m8.711s
00:06:54.046  user	0m16.475s
00:06:54.046  sys	0m0.239s
00:06:54.046   16:56:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.046   16:56:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:54.046  ************************************
00:06:54.046  END TEST bdev_verify_big_io
00:06:54.046  ************************************
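bdev_verify_big_io reuses the same harness with only the I/O size changed, 65536 bytes instead of 4096; the 16x larger payload is why total IOPS drop from roughly 22k in the previous run to roughly 1.5k here while aggregate throughput stays near 90 MiB/s:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3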
00:06:54.046   16:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:54.046   16:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:54.046   16:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.046   16:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:54.303  ************************************
00:06:54.303  START TEST bdev_write_zeroes
00:06:54.303  ************************************
00:06:54.303   16:56:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:54.303  [2024-12-09 16:56:17.155661] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:54.303  [2024-12-09 16:56:17.155793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63352 ]
00:06:54.303  [2024-12-09 16:56:17.307136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.562  [2024-12-09 16:56:17.405166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.126  Running I/O for 1 seconds...
00:06:56.072      69888.00 IOPS,   273.00 MiB/s
00:06:56.072                                                                                                  Latency(us)
00:06:56.072  
[2024-12-09T16:56:19.113Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:56.072  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme0n1             :       1.02    9952.78      38.88       0.00     0.00   12834.27    9931.22   24399.56
00:06:56.072  Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme1n1p1           :       1.02    9940.39      38.83       0.00     0.00   12831.16   10939.47   23794.61
00:06:56.072  Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme1n1p2           :       1.02    9928.28      38.78       0.00     0.00   12820.74   10687.41   23088.84
00:06:56.072  Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme2n1             :       1.03    9917.11      38.74       0.00     0.00   12811.50   10485.76   22383.06
00:06:56.072  Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme2n2             :       1.03    9905.90      38.69       0.00     0.00   12792.73    9023.80   21979.77
00:06:56.072  Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme2n3             :       1.03    9894.79      38.65       0.00     0.00   12782.33    7763.50   22988.01
00:06:56.072  Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:56.072  	 Nvme3n1             :       1.03    9883.71      38.61       0.00     0.00   12776.61    7108.14   24399.56
00:06:56.072  
[2024-12-09T16:56:19.113Z]  ===================================================================================================================
00:06:56.072  
[2024-12-09T16:56:19.113Z]  Total                       :              69422.96     271.18       0.00     0.00   12807.05    7108.14   24399.56
00:06:57.004  
00:06:57.004  real	0m2.686s
00:06:57.004  user	0m2.379s
00:06:57.004  sys	0m0.194s
00:06:57.004   16:56:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.004   16:56:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:57.004  ************************************
00:06:57.004  END TEST bdev_write_zeroes
00:06:57.004  ************************************
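bdev_write_zeroes swaps the workload for write_zeroes on a single core for one second: instead of moving data buffers, each 4 KiB I/O asks the bdev layer to zero the range, which is how one core sustains roughly 70k IOPS across all seven bdevs:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1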
00:06:57.004   16:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:57.004   16:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:57.004   16:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.004   16:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:57.005  ************************************
00:06:57.005  START TEST bdev_json_nonenclosed
00:06:57.005  ************************************
00:06:57.005   16:56:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:57.005  [2024-12-09 16:56:19.877503] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:57.005  [2024-12-09 16:56:19.877619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ]
00:06:57.005  [2024-12-09 16:56:20.034580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.262  [2024-12-09 16:56:20.133637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.262  [2024-12-09 16:56:20.133714] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:57.262  [2024-12-09 16:56:20.133731] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:57.262  [2024-12-09 16:56:20.133740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:57.519  
00:06:57.519  real	0m0.497s
00:06:57.519  user	0m0.299s
00:06:57.519  sys	0m0.094s
00:06:57.519   16:56:20 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.519   16:56:20 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:57.519  ************************************
00:06:57.519  END TEST bdev_json_nonenclosed
00:06:57.519  ************************************
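bdev_json_nonenclosed is a negative test: bdevperf is pointed at a config whose JSON payload is not enclosed in a top-level object, json_config rejects it, and spdk_app_stop exits non-zero, which run_test records as the expected outcome. The log does not show nonenclosed.json itself; a hypothetical minimal config that would trip the same "not enclosed in {}" error is a bare key with no surrounding braces:

    "subsystems": []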
00:06:57.519   16:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:57.519   16:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:57.519   16:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.519   16:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:57.519  ************************************
00:06:57.519  START TEST bdev_json_nonarray
00:06:57.519  ************************************
00:06:57.519   16:56:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:57.519  [2024-12-09 16:56:20.417704] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:57.519  [2024-12-09 16:56:20.417822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63425 ]
00:06:57.776  [2024-12-09 16:56:20.576807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.776  [2024-12-09 16:56:20.675041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.776  [2024-12-09 16:56:20.675131] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:06:57.776  [2024-12-09 16:56:20.675148] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:06:57.776  [2024-12-09 16:56:20.675156] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:58.034  
00:06:58.034  real	0m0.496s
00:06:58.034  user	0m0.300s
00:06:58.034  sys	0m0.092s
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:58.034  ************************************
00:06:58.034  END TEST bdev_json_nonarray
00:06:58.034  ************************************
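bdev_json_nonarray is the companion negative test: here the config is a well-formed object, but "subsystems" is not an array, so json_config fails with the matching error above. Again the real nonarray.json is not shown in the log; a hypothetical config with the same defect:

    { "subsystems": {} }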
00:06:58.034   16:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]]
00:06:58.034   16:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]]
00:06:58.034   16:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:06:58.034   16:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.034   16:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.034   16:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:58.034  ************************************
00:06:58.034  START TEST bdev_gpt_uuid
00:06:58.034  ************************************
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63456
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63456
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63456 ']'
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.034  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:06:58.034   16:56:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:58.034  [2024-12-09 16:56:20.965536] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:06:58.034  [2024-12-09 16:56:20.965650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63456 ]
00:06:58.296  [2024-12-09 16:56:21.124604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.296  [2024-12-09 16:56:21.222339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.862   16:56:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.862   16:56:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:06:58.862   16:56:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:58.862   16:56:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.862   16:56:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:59.120  Some configs were skipped because the RPC state that can call them passed over.
00:06:59.120   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.120   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine
00:06:59.120   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.120   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:59.120   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
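Before any GPT assertions run, the freshly started spdk_tgt loads the same bdev config the perf tests used and then blocks until bdev examination completes; GPT probing of Nvme1n1 happens during examine, and only after it finishes do the partition bdevs Nvme1n1p1/Nvme1n1p2 exist to query. rpc_cmd in the trace is a thin wrapper around the same rpc.py calls, here against the default /var/tmp/spdk.sock:

    "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc" bdev_wait_for_examine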
00:06:59.120    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:06:59.120    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.120    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:59.378    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.378   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[
00:06:59.378  {
00:06:59.378    "name": "Nvme1n1p1",
00:06:59.378    "aliases": [
00:06:59.378      "6f89f330-603b-4116-ac73-2ca8eae53030"
00:06:59.378    ],
00:06:59.378    "product_name": "GPT Disk",
00:06:59.378    "block_size": 4096,
00:06:59.378    "num_blocks": 655104,
00:06:59.378    "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:06:59.378    "assigned_rate_limits": {
00:06:59.378      "rw_ios_per_sec": 0,
00:06:59.378      "rw_mbytes_per_sec": 0,
00:06:59.378      "r_mbytes_per_sec": 0,
00:06:59.378      "w_mbytes_per_sec": 0
00:06:59.378    },
00:06:59.378    "claimed": false,
00:06:59.378    "zoned": false,
00:06:59.378    "supported_io_types": {
00:06:59.378      "read": true,
00:06:59.378      "write": true,
00:06:59.378      "unmap": true,
00:06:59.378      "flush": true,
00:06:59.378      "reset": true,
00:06:59.378      "nvme_admin": false,
00:06:59.378      "nvme_io": false,
00:06:59.378      "nvme_io_md": false,
00:06:59.378      "write_zeroes": true,
00:06:59.378      "zcopy": false,
00:06:59.378      "get_zone_info": false,
00:06:59.378      "zone_management": false,
00:06:59.378      "zone_append": false,
00:06:59.378      "compare": true,
00:06:59.378      "compare_and_write": false,
00:06:59.378      "abort": true,
00:06:59.378      "seek_hole": false,
00:06:59.378      "seek_data": false,
00:06:59.378      "copy": true,
00:06:59.378      "nvme_iov_md": false
00:06:59.378    },
00:06:59.378    "driver_specific": {
00:06:59.378      "gpt": {
00:06:59.378        "base_bdev": "Nvme1n1",
00:06:59.378        "offset_blocks": 256,
00:06:59.379        "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:06:59.379        "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:06:59.379        "partition_name": "SPDK_TEST_first"
00:06:59.379      }
00:06:59.379    }
00:06:59.379  }
00:06:59.379  ]'
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]]
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]'
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[
00:06:59.379  {
00:06:59.379    "name": "Nvme1n1p2",
00:06:59.379    "aliases": [
00:06:59.379      "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:06:59.379    ],
00:06:59.379    "product_name": "GPT Disk",
00:06:59.379    "block_size": 4096,
00:06:59.379    "num_blocks": 655103,
00:06:59.379    "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:06:59.379    "assigned_rate_limits": {
00:06:59.379      "rw_ios_per_sec": 0,
00:06:59.379      "rw_mbytes_per_sec": 0,
00:06:59.379      "r_mbytes_per_sec": 0,
00:06:59.379      "w_mbytes_per_sec": 0
00:06:59.379    },
00:06:59.379    "claimed": false,
00:06:59.379    "zoned": false,
00:06:59.379    "supported_io_types": {
00:06:59.379      "read": true,
00:06:59.379      "write": true,
00:06:59.379      "unmap": true,
00:06:59.379      "flush": true,
00:06:59.379      "reset": true,
00:06:59.379      "nvme_admin": false,
00:06:59.379      "nvme_io": false,
00:06:59.379      "nvme_io_md": false,
00:06:59.379      "write_zeroes": true,
00:06:59.379      "zcopy": false,
00:06:59.379      "get_zone_info": false,
00:06:59.379      "zone_management": false,
00:06:59.379      "zone_append": false,
00:06:59.379      "compare": true,
00:06:59.379      "compare_and_write": false,
00:06:59.379      "abort": true,
00:06:59.379      "seek_hole": false,
00:06:59.379      "seek_data": false,
00:06:59.379      "copy": true,
00:06:59.379      "nvme_iov_md": false
00:06:59.379    },
00:06:59.379    "driver_specific": {
00:06:59.379      "gpt": {
00:06:59.379        "base_bdev": "Nvme1n1",
00:06:59.379        "offset_blocks": 655360,
00:06:59.379        "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:06:59.379        "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:06:59.379        "partition_name": "SPDK_TEST_second"
00:06:59.379      }
00:06:59.379    }
00:06:59.379  }
00:06:59.379  ]'
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]]
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]'
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
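The two UUID assertions above can also be reproduced stand-alone. A hedged sketch against a running target (the UUID is the first partition's GUID from this run and is purely illustrative):

  rootdir=/home/vagrant/spdk_repo/spdk
  uuid=6f89f330-603b-4116-ac73-2ca8eae53030
  # Query a single bdev by its GPT unique partition GUID.
  bdev_json=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$uuid")
  # Exactly one bdev should match, and both its alias and its GPT
  # unique_partition_guid should echo the UUID that was queried.
  [ "$(jq -r 'length' <<<"$bdev_json")" -eq 1 ]
  [ "$(jq -r '.[0].aliases[0]' <<<"$bdev_json")" = "$uuid" ]
  [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json")" = "$uuid" ]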
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63456
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63456 ']'
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63456
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:59.379    16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63456
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:59.379  killing process with pid 63456
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63456'
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63456
00:06:59.379   16:56:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63456
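The killprocess teardown above guards the kill with liveness and process-name checks. The shape of that helper, re-sketched for illustration (not the verbatim autotest_common.sh code):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                 # refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
      # Only kill what we expect (an SPDK reactor), never e.g. sudo itself.
      [ "$(ps --no-headers -o comm= "$pid")" != "sudo" ] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reap it if it is our child
  }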
00:07:01.280  
00:07:01.280  real	0m2.997s
00:07:01.280  user	0m3.115s
00:07:01.280  sys	0m0.374s
00:07:01.280   16:56:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.280   16:56:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:07:01.280  ************************************
00:07:01.280  END TEST bdev_gpt_uuid
00:07:01.280  ************************************
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]]
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:07:01.280   16:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:07:01.280  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:01.538  Waiting for block devices as requested
00:07:01.538  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:07:01.538  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:07:01.538  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:07:01.810  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:07:07.073  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:07:07.073   16:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:07:07.073   16:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:07:07.073  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:07:07.073  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:07:07.073  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:07:07.073  /dev/nvme0n1: calling ioctl to re-read partition table: Success
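The wipefs pass above clears the primary GPT header at LBA 1, the backup GPT header at the end of the device, and the protective-MBR magic (55 aa). Reproduced as a stand-alone, destructive sketch, so the device path is a placeholder:

  dev=/dev/nvme0n1              # placeholder: use a scratch device; this erases it
  wipefs "$dev"                 # list signatures before touching anything
  wipefs --all "$dev"           # erase GPT (primary + backup) and the PMBR magic
  blockdev --rereadpt "$dev"    # ask the kernel to re-read the partition table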
00:07:07.074   16:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:07:07.074  
00:07:07.074  real	0m57.664s
00:07:07.074  user	1m12.878s
00:07:07.074  sys	0m8.174s
00:07:07.074   16:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.074   16:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:07.074  ************************************
00:07:07.074  END TEST blockdev_nvme_gpt
00:07:07.074  ************************************
00:07:07.074   16:56:29  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:07:07.074   16:56:29  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:07.074   16:56:29  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.074   16:56:29  -- common/autotest_common.sh@10 -- # set +x
00:07:07.074  ************************************
00:07:07.074  START TEST nvme
00:07:07.074  ************************************
00:07:07.074   16:56:29 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:07:07.074  * Looking for test storage...
00:07:07.074  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:07.074     16:56:30 nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:07:07.074     16:56:30 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:07.074    16:56:30 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:07.074    16:56:30 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:07.074    16:56:30 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:07.074    16:56:30 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:07:07.074    16:56:30 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:07:07.074    16:56:30 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:07:07.074    16:56:30 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:07:07.074    16:56:30 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:07:07.074    16:56:30 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:07:07.074    16:56:30 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:07:07.074    16:56:30 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:07.074    16:56:30 nvme -- scripts/common.sh@344 -- # case "$op" in
00:07:07.074    16:56:30 nvme -- scripts/common.sh@345 -- # : 1
00:07:07.074    16:56:30 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:07.074    16:56:30 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:07.074     16:56:30 nvme -- scripts/common.sh@365 -- # decimal 1
00:07:07.074     16:56:30 nvme -- scripts/common.sh@353 -- # local d=1
00:07:07.074     16:56:30 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:07.074     16:56:30 nvme -- scripts/common.sh@355 -- # echo 1
00:07:07.074    16:56:30 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:07:07.074     16:56:30 nvme -- scripts/common.sh@366 -- # decimal 2
00:07:07.074     16:56:30 nvme -- scripts/common.sh@353 -- # local d=2
00:07:07.074     16:56:30 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:07.074     16:56:30 nvme -- scripts/common.sh@355 -- # echo 2
00:07:07.074    16:56:30 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:07:07.074    16:56:30 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:07.074    16:56:30 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:07.074    16:56:30 nvme -- scripts/common.sh@368 -- # return 0
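The trace above is the harness's cmp_versions splitting '1.15' and '2' on '.', '-' and ':' and comparing field by field. The same idea in miniature, as an illustrative re-sketch assuming purely numeric fields (not the scripts/common.sh implementation itself):

  ver_lt() {
      # True (0) when $1 sorts strictly before $2, numeric field by field.
      local IFS=.-:
      local -a a b
      read -ra a <<<"$1"; read -ra b <<<"$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields act as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "lcov older than 2"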
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:07.074  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.074  		--rc genhtml_branch_coverage=1
00:07:07.074  		--rc genhtml_function_coverage=1
00:07:07.074  		--rc genhtml_legend=1
00:07:07.074  		--rc geninfo_all_blocks=1
00:07:07.074  		--rc geninfo_unexecuted_blocks=1
00:07:07.074  		
00:07:07.074  		'
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:07.074  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.074  		--rc genhtml_branch_coverage=1
00:07:07.074  		--rc genhtml_function_coverage=1
00:07:07.074  		--rc genhtml_legend=1
00:07:07.074  		--rc geninfo_all_blocks=1
00:07:07.074  		--rc geninfo_unexecuted_blocks=1
00:07:07.074  		
00:07:07.074  		'
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:07.074  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.074  		--rc genhtml_branch_coverage=1
00:07:07.074  		--rc genhtml_function_coverage=1
00:07:07.074  		--rc genhtml_legend=1
00:07:07.074  		--rc geninfo_all_blocks=1
00:07:07.074  		--rc geninfo_unexecuted_blocks=1
00:07:07.074  		
00:07:07.074  		'
00:07:07.074    16:56:30 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:07.074  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.074  		--rc genhtml_branch_coverage=1
00:07:07.074  		--rc genhtml_function_coverage=1
00:07:07.074  		--rc genhtml_legend=1
00:07:07.074  		--rc geninfo_all_blocks=1
00:07:07.074  		--rc geninfo_unexecuted_blocks=1
00:07:07.074  		
00:07:07.074  		'
00:07:07.074   16:56:30 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:07.639  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:08.205  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:07:08.205  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:07:08.205  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:07:08.205  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:07:08.205    16:56:31 nvme -- nvme/nvme.sh@79 -- # uname
00:07:08.205   16:56:31 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:07:08.205   16:56:31 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:07:08.205   16:56:31 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1075 -- # stubpid=64091
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:07:08.205  Waiting for stub to be ready for secondary processes...
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to be ready for secondary processes...
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64091 ]]
00:07:08.205   16:56:31 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:07:08.205  [2024-12-09 16:56:31.094963] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:07:08.205  [2024-12-09 16:56:31.095083] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:07:09.139  [2024-12-09 16:56:31.841384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:09.139  [2024-12-09 16:56:31.934683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:09.139  [2024-12-09 16:56:31.935013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:09.139  [2024-12-09 16:56:31.935039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:09.139  [2024-12-09 16:56:31.948820] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:07:09.139  [2024-12-09 16:56:31.948874] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:07:09.139  [2024-12-09 16:56:31.959409] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:07:09.139  [2024-12-09 16:56:31.959492] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:07:09.139  [2024-12-09 16:56:31.960895] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:07:09.139  [2024-12-09 16:56:31.961024] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
00:07:09.139  [2024-12-09 16:56:31.961065] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
00:07:09.139  [2024-12-09 16:56:31.962454] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:07:09.139  [2024-12-09 16:56:31.962559] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created
00:07:09.139  [2024-12-09 16:56:31.962603] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created
00:07:09.139  [2024-12-09 16:56:31.964537] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:07:09.139  [2024-12-09 16:56:31.964764] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created
00:07:09.139  [2024-12-09 16:56:31.964831] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created
00:07:09.139  [2024-12-09 16:56:31.964890] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created
00:07:09.139  [2024-12-09 16:56:31.964929] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:07:09.139  done.
00:07:09.139   16:56:32 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:07:09.139   16:56:32 nvme -- common/autotest_common.sh@1082 -- # echo done.
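The wait loop above polls for the stub's sentinel file while confirming the stub pid is still alive. That readiness pattern in a minimal sketch (paths and timeout are illustrative):

  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stub_pid=$!
  for _ in $(seq 1 30); do                      # give it up to ~30 seconds
      [ -e /var/run/spdk_stub0 ] && break       # sentinel: primary is ready
      [ -e "/proc/$stub_pid" ] || { echo "stub died" >&2; exit 1; }
      sleep 1s
  done
  [ -e /var/run/spdk_stub0 ] || { echo "stub never became ready" >&2; exit 1; }
  echo done.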
00:07:09.139   16:56:32 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:07:09.139   16:56:32 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:07:09.139   16:56:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:09.139   16:56:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:09.139  ************************************
00:07:09.139  START TEST nvme_reset
00:07:09.139  ************************************
00:07:09.139   16:56:32 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:07:09.398  Initializing NVMe Controllers
00:07:09.398  Skipping QEMU NVMe SSD at 0000:00:10.0
00:07:09.398  Skipping QEMU NVMe SSD at 0000:00:11.0
00:07:09.398  Skipping QEMU NVMe SSD at 0000:00:13.0
00:07:09.398  Skipping QEMU NVMe SSD at 0000:00:12.0
00:07:09.398  No NVMe controller found; /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:07:09.398  
00:07:09.398  real	0m0.194s
00:07:09.398  user	0m0.074s
00:07:09.398  sys	0m0.081s
00:07:09.398   16:56:32 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:09.398   16:56:32 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:07:09.398  ************************************
00:07:09.398  END TEST nvme_reset
00:07:09.398  ************************************
00:07:09.398   16:56:32 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:07:09.398   16:56:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:09.398   16:56:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:09.398   16:56:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:09.398  ************************************
00:07:09.398  START TEST nvme_identify
00:07:09.398  ************************************
00:07:09.398   16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:07:09.398   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:07:09.398   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:07:09.398   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:07:09.398    16:56:32 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:07:09.398    16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:09.398    16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:07:09.398    16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:09.398     16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:09.398     16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:09.398    16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:09.398    16:56:32 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
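get_nvme_bdfs above derives the PCIe addresses from gen_nvme.sh's JSON output (config[].params.traddr). A sketch that enumerates them and identifies each controller one at a time; note this run instead attaches through the stub with -i 0, and the -r transport-ID form is an assumption worth checking against your build's help output:

  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh emits a bdev config whose params.traddr fields are the BDFs.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      # -r selects a single controller by transport ID (assumed flag; see
      # spdk_nvme_identify --help on your build).
      "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf"
  done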
00:07:09.398   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:07:09.660  [2024-12-09 16:56:32.517019] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64112 terminated unexpected
00:07:09.660  =====================================================
00:07:09.660  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:09.660  =====================================================
00:07:09.660  Controller Capabilities/Features
00:07:09.660  ================================
00:07:09.660  Vendor ID:                             1b36
00:07:09.660  Subsystem Vendor ID:                   1af4
00:07:09.660  Serial Number:                         12340
00:07:09.660  Model Number:                          QEMU NVMe Ctrl
00:07:09.660  Firmware Version:                      8.0.0
00:07:09.660  Recommended Arb Burst:                 6
00:07:09.660  IEEE OUI Identifier:                   00 54 52
00:07:09.660  Multi-path I/O
00:07:09.660    May have multiple subsystem ports:   No
00:07:09.660    May have multiple controllers:       No
00:07:09.660    Associated with SR-IOV VF:           No
00:07:09.660  Max Data Transfer Size:                524288
00:07:09.660  Max Number of Namespaces:              256
00:07:09.660  Max Number of I/O Queues:              64
00:07:09.660  NVMe Specification Version (VS):       1.4
00:07:09.660  NVMe Specification Version (Identify): 1.4
00:07:09.660  Maximum Queue Entries:                 2048
00:07:09.660  Contiguous Queues Required:            Yes
00:07:09.660  Arbitration Mechanisms Supported
00:07:09.660    Weighted Round Robin:                Not Supported
00:07:09.660    Vendor Specific:                     Not Supported
00:07:09.660  Reset Timeout:                         7500 ms
00:07:09.660  Doorbell Stride:                       4 bytes
00:07:09.661  NVM Subsystem Reset:                   Not Supported
00:07:09.661  Command Sets Supported
00:07:09.661    NVM Command Set:                     Supported
00:07:09.661  Boot Partition:                        Not Supported
00:07:09.661  Memory Page Size Minimum:              4096 bytes
00:07:09.661  Memory Page Size Maximum:              65536 bytes
00:07:09.661  Persistent Memory Region:              Not Supported
00:07:09.661  Optional Asynchronous Events Supported
00:07:09.661    Namespace Attribute Notices:         Supported
00:07:09.661    Firmware Activation Notices:         Not Supported
00:07:09.661    ANA Change Notices:                  Not Supported
00:07:09.661    PLE Aggregate Log Change Notices:    Not Supported
00:07:09.661    LBA Status Info Alert Notices:       Not Supported
00:07:09.661    EGE Aggregate Log Change Notices:    Not Supported
00:07:09.661    Normal NVM Subsystem Shutdown event: Not Supported
00:07:09.661    Zone Descriptor Change Notices:      Not Supported
00:07:09.661    Discovery Log Change Notices:        Not Supported
00:07:09.661  Controller Attributes
00:07:09.661    128-bit Host Identifier:             Not Supported
00:07:09.661    Non-Operational Permissive Mode:     Not Supported
00:07:09.661    NVM Sets:                            Not Supported
00:07:09.661    Read Recovery Levels:                Not Supported
00:07:09.661    Endurance Groups:                    Not Supported
00:07:09.661    Predictable Latency Mode:            Not Supported
00:07:09.661    Traffic Based Keep Alive:            Not Supported
00:07:09.661    Namespace Granularity:               Not Supported
00:07:09.661    SQ Associations:                     Not Supported
00:07:09.661    UUID List:                           Not Supported
00:07:09.661    Multi-Domain Subsystem:              Not Supported
00:07:09.661    Fixed Capacity Management:           Not Supported
00:07:09.661    Variable Capacity Management:        Not Supported
00:07:09.661    Delete Endurance Group:              Not Supported
00:07:09.661    Delete NVM Set:                      Not Supported
00:07:09.661    Extended LBA Formats Supported:      Supported
00:07:09.661    Flexible Data Placement Supported:   Not Supported
00:07:09.661  
00:07:09.661  Controller Memory Buffer Support
00:07:09.661  ================================
00:07:09.661  Supported:                             No
00:07:09.661  
00:07:09.661  Persistent Memory Region Support
00:07:09.661  ================================
00:07:09.661  Supported:                             No
00:07:09.661  
00:07:09.661  Admin Command Set Attributes
00:07:09.661  ============================
00:07:09.661  Security Send/Receive:                 Not Supported
00:07:09.661  Format NVM:                            Supported
00:07:09.661  Firmware Activate/Download:            Not Supported
00:07:09.661  Namespace Management:                  Supported
00:07:09.661  Device Self-Test:                      Not Supported
00:07:09.661  Directives:                            Supported
00:07:09.661  NVMe-MI:                               Not Supported
00:07:09.661  Virtualization Management:             Not Supported
00:07:09.661  Doorbell Buffer Config:                Supported
00:07:09.661  Get LBA Status Capability:             Not Supported
00:07:09.661  Command & Feature Lockdown Capability: Not Supported
00:07:09.661  Abort Command Limit:                   4
00:07:09.661  Async Event Request Limit:             4
00:07:09.661  Number of Firmware Slots:              N/A
00:07:09.661  Firmware Slot 1 Read-Only:             N/A
00:07:09.661  Firmware Activation Without Reset:     N/A
00:07:09.661  Multiple Update Detection Support:     N/A
00:07:09.661  Firmware Update Granularity:           No Information Provided
00:07:09.661  Per-Namespace SMART Log:               Yes
00:07:09.661  Asymmetric Namespace Access Log Page:  Not Supported
00:07:09.661  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:07:09.661  Command Effects Log Page:              Supported
00:07:09.661  Get Log Page Extended Data:            Supported
00:07:09.661  Telemetry Log Pages:                   Not Supported
00:07:09.661  Persistent Event Log Pages:            Not Supported
00:07:09.661  Supported Log Pages Log Page:          May Support
00:07:09.661  Commands Supported & Effects Log Page: Not Supported
00:07:09.661  Feature Identifiers & Effects Log Page: May Support
00:07:09.661  NVMe-MI Commands & Effects Log Page:   May Support
00:07:09.661  Data Area 4 for Telemetry Log:         Not Supported
00:07:09.661  Error Log Page Entries Supported:      1
00:07:09.661  Keep Alive:                            Not Supported
00:07:09.661  
00:07:09.661  NVM Command Set Attributes
00:07:09.661  ==========================
00:07:09.661  Submission Queue Entry Size
00:07:09.661    Max:                       64
00:07:09.661    Min:                       64
00:07:09.661  Completion Queue Entry Size
00:07:09.661    Max:                       16
00:07:09.661    Min:                       16
00:07:09.661  Number of Namespaces:        256
00:07:09.661  Compare Command:             Supported
00:07:09.661  Write Uncorrectable Command: Not Supported
00:07:09.661  Dataset Management Command:  Supported
00:07:09.661  Write Zeroes Command:        Supported
00:07:09.661  Set Features Save Field:     Supported
00:07:09.661  Reservations:                Not Supported
00:07:09.661  Timestamp:                   Supported
00:07:09.661  Copy:                        Supported
00:07:09.661  Volatile Write Cache:        Present
00:07:09.661  Atomic Write Unit (Normal):  1
00:07:09.661  Atomic Write Unit (PFail):   1
00:07:09.661  Atomic Compare & Write Unit: 1
00:07:09.661  Fused Compare & Write:       Not Supported
00:07:09.661  Scatter-Gather List
00:07:09.661    SGL Command Set:           Supported
00:07:09.661    SGL Keyed:                 Not Supported
00:07:09.661    SGL Bit Bucket Descriptor: Not Supported
00:07:09.661    SGL Metadata Pointer:      Not Supported
00:07:09.661    Oversized SGL:             Not Supported
00:07:09.661    SGL Metadata Address:      Not Supported
00:07:09.661    SGL Offset:                Not Supported
00:07:09.661    Transport SGL Data Block:  Not Supported
00:07:09.661  Replay Protected Memory Block:  Not Supported
00:07:09.661  
00:07:09.661  Firmware Slot Information
00:07:09.661  =========================
00:07:09.661  Active slot:                 1
00:07:09.661  Slot 1 Firmware Revision:    1.0
00:07:09.661  
00:07:09.661  
00:07:09.661  Commands Supported and Effects
00:07:09.661  ==============================
00:07:09.661  Admin Commands
00:07:09.661  --------------
00:07:09.661     Delete I/O Submission Queue (00h): Supported 
00:07:09.661     Create I/O Submission Queue (01h): Supported 
00:07:09.661                    Get Log Page (02h): Supported 
00:07:09.661     Delete I/O Completion Queue (04h): Supported 
00:07:09.661     Create I/O Completion Queue (05h): Supported 
00:07:09.661                        Identify (06h): Supported 
00:07:09.661                           Abort (08h): Supported 
00:07:09.661                    Set Features (09h): Supported 
00:07:09.661                    Get Features (0Ah): Supported 
00:07:09.661      Asynchronous Event Request (0Ch): Supported 
00:07:09.661            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:09.661                  Directive Send (19h): Supported 
00:07:09.661               Directive Receive (1Ah): Supported 
00:07:09.661       Virtualization Management (1Ch): Supported 
00:07:09.661          Doorbell Buffer Config (7Ch): Supported 
00:07:09.661                      Format NVM (80h): Supported LBA-Change 
00:07:09.661  I/O Commands
00:07:09.661  ------------
00:07:09.661                           Flush (00h): Supported LBA-Change 
00:07:09.661                           Write (01h): Supported LBA-Change 
00:07:09.661                            Read (02h): Supported 
00:07:09.661                         Compare (05h): Supported 
00:07:09.661                    Write Zeroes (08h): Supported LBA-Change 
00:07:09.661              Dataset Management (09h): Supported LBA-Change 
00:07:09.661                         Unknown (0Ch): Supported 
00:07:09.661                         Unknown (12h): Supported 
00:07:09.661                            Copy (19h): Supported LBA-Change 
00:07:09.661                         Unknown (1Dh): Supported LBA-Change 
00:07:09.661  
00:07:09.661  Error Log
00:07:09.661  =========
00:07:09.661  
00:07:09.661  Arbitration
00:07:09.661  ===========
00:07:09.661  Arbitration Burst:           no limit
00:07:09.661  
00:07:09.661  Power Management
00:07:09.661  ================
00:07:09.661  Number of Power States:          1
00:07:09.661  Current Power State:             Power State #0
00:07:09.661  Power State #0:
00:07:09.661    Max Power:                     25.00 W
00:07:09.661    Non-Operational State:         Operational
00:07:09.661    Entry Latency:                 16 microseconds
00:07:09.661    Exit Latency:                  4 microseconds
00:07:09.661    Relative Read Throughput:      0
00:07:09.661    Relative Read Latency:         0
00:07:09.661    Relative Write Throughput:     0
00:07:09.661    Relative Write Latency:        0
00:07:09.661  [2024-12-09 16:56:32.518033] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64112 terminated unexpected
00:07:09.662    Idle Power:                     Not Reported
00:07:09.662    Active Power:                   Not Reported
00:07:09.662  Non-Operational Permissive Mode: Not Supported
00:07:09.662  
00:07:09.662  Health Information
00:07:09.662  ==================
00:07:09.662  Critical Warnings:
00:07:09.662    Available Spare Space:     OK
00:07:09.662    Temperature:               OK
00:07:09.662    Device Reliability:        OK
00:07:09.662    Read Only:                 No
00:07:09.662    Volatile Memory Backup:    OK
00:07:09.662  Current Temperature:         323 Kelvin (50 Celsius)
00:07:09.662  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:09.662  Available Spare:             0%
00:07:09.662  Available Spare Threshold:   0%
00:07:09.662  Life Percentage Used:        0%
00:07:09.662  Data Units Read:             638
00:07:09.662  Data Units Written:          566
00:07:09.662  Host Read Commands:          34406
00:07:09.662  Host Write Commands:         34192
00:07:09.662  Controller Busy Time:        0 minutes
00:07:09.662  Power Cycles:                0
00:07:09.662  Power On Hours:              0 hours
00:07:09.662  Unsafe Shutdowns:            0
00:07:09.662  Unrecoverable Media Errors:  0
00:07:09.662  Lifetime Error Log Entries:  0
00:07:09.662  Warning Temperature Time:    0 minutes
00:07:09.662  Critical Temperature Time:   0 minutes
00:07:09.662  
00:07:09.662  Number of Queues
00:07:09.662  ================
00:07:09.662  Number of I/O Submission Queues:      64
00:07:09.662  Number of I/O Completion Queues:      64
00:07:09.662  
00:07:09.662  ZNS Specific Controller Data
00:07:09.662  ============================
00:07:09.662  Zone Append Size Limit:      0
00:07:09.662  
00:07:09.662  
00:07:09.662  Active Namespaces
00:07:09.662  =================
00:07:09.662  Namespace ID:1
00:07:09.662  Error Recovery Timeout:                Unlimited
00:07:09.662  Command Set Identifier:                NVM (00h)
00:07:09.662  Deallocate:                            Supported
00:07:09.662  Deallocated/Unwritten Error:           Supported
00:07:09.662  Deallocated Read Value:                All 0x00
00:07:09.662  Deallocate in Write Zeroes:            Not Supported
00:07:09.662  Deallocated Guard Field:               0xFFFF
00:07:09.662  Flush:                                 Supported
00:07:09.662  Reservation:                           Not Supported
00:07:09.662  Metadata Transferred as:               Separate Metadata Buffer
00:07:09.662  Namespace Sharing Capabilities:        Private
00:07:09.662  Size (in LBAs):                        1548666 (5GiB)
00:07:09.662  Capacity (in LBAs):                    1548666 (5GiB)
00:07:09.662  Utilization (in LBAs):                 1548666 (5GiB)
00:07:09.662  Thin Provisioning:                     Not Supported
00:07:09.662  Per-NS Atomic Units:                   No
00:07:09.662  Maximum Single Source Range Length:    128
00:07:09.662  Maximum Copy Length:                   128
00:07:09.662  Maximum Source Range Count:            128
00:07:09.662  NGUID/EUI64 Never Reused:              No
00:07:09.662  Namespace Write Protected:             No
00:07:09.662  Number of LBA Formats:                 8
00:07:09.662  Current LBA Format:                    LBA Format #07
00:07:09.662  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.662  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.662  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.662  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.662  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.662  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.662  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.662  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.662  
00:07:09.662  NVM Specific Namespace Data
00:07:09.662  ===========================
00:07:09.662  Logical Block Storage Tag Mask:               0
00:07:09.662  Protection Information Capabilities:
00:07:09.662    16b Guard Protection Information Storage Tag Support:  No
00:07:09.662    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.662    Storage Tag Check Read Support:                        No
00:07:09.662  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.662  =====================================================
00:07:09.662  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:09.662  =====================================================
00:07:09.662  Controller Capabilities/Features
00:07:09.662  ================================
00:07:09.662  Vendor ID:                             1b36
00:07:09.662  Subsystem Vendor ID:                   1af4
00:07:09.662  Serial Number:                         12341
00:07:09.662  Model Number:                          QEMU NVMe Ctrl
00:07:09.662  Firmware Version:                      8.0.0
00:07:09.662  Recommended Arb Burst:                 6
00:07:09.662  IEEE OUI Identifier:                   00 54 52
00:07:09.662  Multi-path I/O
00:07:09.662    May have multiple subsystem ports:   No
00:07:09.662    May have multiple controllers:       No
00:07:09.662    Associated with SR-IOV VF:           No
00:07:09.662  Max Data Transfer Size:                524288
00:07:09.662  Max Number of Namespaces:              256
00:07:09.662  Max Number of I/O Queues:              64
00:07:09.662  NVMe Specification Version (VS):       1.4
00:07:09.662  NVMe Specification Version (Identify): 1.4
00:07:09.662  Maximum Queue Entries:                 2048
00:07:09.662  Contiguous Queues Required:            Yes
00:07:09.662  Arbitration Mechanisms Supported
00:07:09.662    Weighted Round Robin:                Not Supported
00:07:09.662    Vendor Specific:                     Not Supported
00:07:09.662  Reset Timeout:                         7500 ms
00:07:09.662  Doorbell Stride:                       4 bytes
00:07:09.662  NVM Subsystem Reset:                   Not Supported
00:07:09.662  Command Sets Supported
00:07:09.662    NVM Command Set:                     Supported
00:07:09.662  Boot Partition:                        Not Supported
00:07:09.662  Memory Page Size Minimum:              4096 bytes
00:07:09.662  Memory Page Size Maximum:              65536 bytes
00:07:09.662  Persistent Memory Region:              Not Supported
00:07:09.662  Optional Asynchronous Events Supported
00:07:09.662    Namespace Attribute Notices:         Supported
00:07:09.662    Firmware Activation Notices:         Not Supported
00:07:09.662    ANA Change Notices:                  Not Supported
00:07:09.662    PLE Aggregate Log Change Notices:    Not Supported
00:07:09.662    LBA Status Info Alert Notices:       Not Supported
00:07:09.662    EGE Aggregate Log Change Notices:    Not Supported
00:07:09.662    Normal NVM Subsystem Shutdown event: Not Supported
00:07:09.662    Zone Descriptor Change Notices:      Not Supported
00:07:09.662    Discovery Log Change Notices:        Not Supported
00:07:09.662  Controller Attributes
00:07:09.662    128-bit Host Identifier:             Not Supported
00:07:09.662    Non-Operational Permissive Mode:     Not Supported
00:07:09.662    NVM Sets:                            Not Supported
00:07:09.662    Read Recovery Levels:                Not Supported
00:07:09.662    Endurance Groups:                    Not Supported
00:07:09.662    Predictable Latency Mode:            Not Supported
00:07:09.662    Traffic Based Keep Alive:            Not Supported
00:07:09.662    Namespace Granularity:               Not Supported
00:07:09.662    SQ Associations:                     Not Supported
00:07:09.662    UUID List:                           Not Supported
00:07:09.662    Multi-Domain Subsystem:              Not Supported
00:07:09.662    Fixed Capacity Management:           Not Supported
00:07:09.662    Variable Capacity Management:        Not Supported
00:07:09.662    Delete Endurance Group:              Not Supported
00:07:09.662    Delete NVM Set:                      Not Supported
00:07:09.662    Extended LBA Formats Supported:      Supported
00:07:09.662    Flexible Data Placement Supported:   Not Supported
00:07:09.662  
00:07:09.662  Controller Memory Buffer Support
00:07:09.662  ================================
00:07:09.662  Supported:                             No
00:07:09.662  
00:07:09.662  Persistent Memory Region Support
00:07:09.662  ================================
00:07:09.662  Supported:                             No
00:07:09.662  
00:07:09.662  Admin Command Set Attributes
00:07:09.662  ============================
00:07:09.662  Security Send/Receive:                 Not Supported
00:07:09.662  Format NVM:                            Supported
00:07:09.662  Firmware Activate/Download:            Not Supported
00:07:09.663  Namespace Management:                  Supported
00:07:09.663  Device Self-Test:                      Not Supported
00:07:09.663  Directives:                            Supported
00:07:09.663  NVMe-MI:                               Not Supported
00:07:09.663  Virtualization Management:             Not Supported
00:07:09.663  Doorbell Buffer Config:                Supported
00:07:09.663  Get LBA Status Capability:             Not Supported
00:07:09.663  Command & Feature Lockdown Capability: Not Supported
00:07:09.663  Abort Command Limit:                   4
00:07:09.663  Async Event Request Limit:             4
00:07:09.663  Number of Firmware Slots:              N/A
00:07:09.663  Firmware Slot 1 Read-Only:             N/A
00:07:09.663  Firmware Activation Without Reset:     N/A
00:07:09.663  Multiple Update Detection Support:     N/A
00:07:09.663  Firmware Update Granularity:           No Information Provided
00:07:09.663  Per-Namespace SMART Log:               Yes
00:07:09.663  Asymmetric Namespace Access Log Page:  Not Supported
00:07:09.663  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:07:09.663  Command Effects Log Page:              Supported
00:07:09.663  Get Log Page Extended Data:            Supported
00:07:09.663  Telemetry Log Pages:                   Not Supported
00:07:09.663  Persistent Event Log Pages:            Not Supported
00:07:09.663  Supported Log Pages Log Page:          May Support
00:07:09.663  Commands Supported & Effects Log Page: Not Supported
00:07:09.663  Feature Identifiers & Effects Log Page: May Support
00:07:09.663  NVMe-MI Commands & Effects Log Page:   May Support
00:07:09.663  Data Area 4 for Telemetry Log:         Not Supported
00:07:09.663  Error Log Page Entries Supported:      1
00:07:09.663  Keep Alive:                            Not Supported
00:07:09.663  
00:07:09.663  NVM Command Set Attributes
00:07:09.663  ==========================
00:07:09.663  Submission Queue Entry Size
00:07:09.663    Max:                       64
00:07:09.663    Min:                       64
00:07:09.663  Completion Queue Entry Size
00:07:09.663    Max:                       16
00:07:09.663    Min:                       16
00:07:09.663  Number of Namespaces:        256
00:07:09.663  Compare Command:             Supported
00:07:09.663  Write Uncorrectable Command: Not Supported
00:07:09.663  Dataset Management Command:  Supported
00:07:09.663  Write Zeroes Command:        Supported
00:07:09.663  Set Features Save Field:     Supported
00:07:09.663  Reservations:                Not Supported
00:07:09.663  Timestamp:                   Supported
00:07:09.663  Copy:                        Supported
00:07:09.663  Volatile Write Cache:        Present
00:07:09.663  Atomic Write Unit (Normal):  1
00:07:09.663  Atomic Write Unit (PFail):   1
00:07:09.663  Atomic Compare & Write Unit: 1
00:07:09.663  Fused Compare & Write:       Not Supported
00:07:09.663  Scatter-Gather List
00:07:09.663    SGL Command Set:           Supported
00:07:09.663    SGL Keyed:                 Not Supported
00:07:09.663    SGL Bit Bucket Descriptor: Not Supported
00:07:09.663    SGL Metadata Pointer:      Not Supported
00:07:09.663    Oversized SGL:             Not Supported
00:07:09.663    SGL Metadata Address:      Not Supported
00:07:09.663    SGL Offset:                Not Supported
00:07:09.663    Transport SGL Data Block:  Not Supported
00:07:09.663  Replay Protected Memory Block:  Not Supported
00:07:09.663  
00:07:09.663  Firmware Slot Information
00:07:09.663  =========================
00:07:09.663  Active slot:                 1
00:07:09.663  Slot 1 Firmware Revision:    1.0
00:07:09.663  
00:07:09.663  
00:07:09.663  Commands Supported and Effects
00:07:09.663  ==============================
00:07:09.663  Admin Commands
00:07:09.663  --------------
00:07:09.663     Delete I/O Submission Queue (00h): Supported 
00:07:09.663     Create I/O Submission Queue (01h): Supported 
00:07:09.663                    Get Log Page (02h): Supported 
00:07:09.663     Delete I/O Completion Queue (04h): Supported 
00:07:09.663     Create I/O Completion Queue (05h): Supported 
00:07:09.663                        Identify (06h): Supported 
00:07:09.663                           Abort (08h): Supported 
00:07:09.663                    Set Features (09h): Supported 
00:07:09.663                    Get Features (0Ah): Supported 
00:07:09.663      Asynchronous Event Request (0Ch): Supported 
00:07:09.663            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:09.663                  Directive Send (19h): Supported 
00:07:09.663               Directive Receive (1Ah): Supported 
00:07:09.663       Virtualization Management (1Ch): Supported 
00:07:09.663          Doorbell Buffer Config (7Ch): Supported 
00:07:09.663                      Format NVM (80h): Supported LBA-Change 
00:07:09.663  I/O Commands
00:07:09.663  ------------
00:07:09.663                           Flush (00h): Supported LBA-Change 
00:07:09.663                           Write (01h): Supported LBA-Change 
00:07:09.663                            Read (02h): Supported 
00:07:09.663                         Compare (05h): Supported 
00:07:09.663                    Write Zeroes (08h): Supported LBA-Change 
00:07:09.663              Dataset Management (09h): Supported LBA-Change 
00:07:09.663                         Unknown (0Ch): Supported 
00:07:09.663                         Unknown (12h): Supported 
00:07:09.663                            Copy (19h): Supported LBA-Change 
00:07:09.663                         Unknown (1Dh): Supported LBA-Change 
00:07:09.663  
00:07:09.663  Error Log
00:07:09.663  =========
00:07:09.663  
00:07:09.663  Arbitration
00:07:09.663  ===========
00:07:09.663  Arbitration Burst:           no limit
00:07:09.663  
00:07:09.663  Power Management
00:07:09.663  ================
00:07:09.663  Number of Power States:          1
00:07:09.663  Current Power State:             Power State #0
00:07:09.663  Power State #0:
00:07:09.663    Max Power:                     25.00 W
00:07:09.663    Non-Operational State:         Operational
00:07:09.663    Entry Latency:                 16 microseconds
00:07:09.663    Exit Latency:                  4 microseconds
00:07:09.663    Relative Read Throughput:      0
00:07:09.663    Relative Read Latency:         0
00:07:09.663    Relative Write Throughput:     0
00:07:09.663    Relative Write Latency:        0
00:07:09.663    Idle Power:                     Not Reported
00:07:09.663    Active Power:                   Not Reported
00:07:09.663  Non-Operational Permissive Mode: Not Supported
00:07:09.663  
00:07:09.663  Health Information
00:07:09.663  ==================
00:07:09.663  Critical Warnings:
00:07:09.663    Available Spare Space:     OK
00:07:09.663  [2024-12-09 16:56:32.518545] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64112 terminated unexpected
00:07:09.663    Temperature:               OK
00:07:09.663    Device Reliability:        OK
00:07:09.663    Read Only:                 No
00:07:09.663    Volatile Memory Backup:    OK
00:07:09.663  Current Temperature:         323 Kelvin (50 Celsius)
00:07:09.663  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:09.663  Available Spare:             0%
00:07:09.663  Available Spare Threshold:   0%
00:07:09.663  Life Percentage Used:        0%
00:07:09.663  Data Units Read:             954
00:07:09.663  Data Units Written:          821
00:07:09.663  Host Read Commands:          52212
00:07:09.663  Host Write Commands:         51001
00:07:09.663  Controller Busy Time:        0 minutes
00:07:09.663  Power Cycles:                0
00:07:09.663  Power On Hours:              0 hours
00:07:09.663  Unsafe Shutdowns:            0
00:07:09.663  Unrecoverable Media Errors:  0
00:07:09.663  Lifetime Error Log Entries:  0
00:07:09.663  Warning Temperature Time:    0 minutes
00:07:09.663  Critical Temperature Time:   0 minutes
00:07:09.663  
00:07:09.663  Number of Queues
00:07:09.663  ================
00:07:09.663  Number of I/O Submission Queues:      64
00:07:09.663  Number of I/O Completion Queues:      64
00:07:09.663  
00:07:09.663  ZNS Specific Controller Data
00:07:09.663  ============================
00:07:09.663  Zone Append Size Limit:      0
00:07:09.663  
00:07:09.663  
00:07:09.663  Active Namespaces
00:07:09.663  =================
00:07:09.663  Namespace ID:1
00:07:09.663  Error Recovery Timeout:                Unlimited
00:07:09.663  Command Set Identifier:                NVM (00h)
00:07:09.663  Deallocate:                            Supported
00:07:09.663  Deallocated/Unwritten Error:           Supported
00:07:09.663  Deallocated Read Value:                All 0x00
00:07:09.663  Deallocate in Write Zeroes:            Not Supported
00:07:09.663  Deallocated Guard Field:               0xFFFF
00:07:09.663  Flush:                                 Supported
00:07:09.663  Reservation:                           Not Supported
00:07:09.663  Namespace Sharing Capabilities:        Private
00:07:09.663  Size (in LBAs):                        1310720 (5GiB)
00:07:09.663  Capacity (in LBAs):                    1310720 (5GiB)
00:07:09.663  Utilization (in LBAs):                 1310720 (5GiB)
00:07:09.663  Thin Provisioning:                     Not Supported
00:07:09.663  Per-NS Atomic Units:                   No
00:07:09.664  Maximum Single Source Range Length:    128
00:07:09.664  Maximum Copy Length:                   128
00:07:09.664  Maximum Source Range Count:            128
00:07:09.664  NGUID/EUI64 Never Reused:              No
00:07:09.664  Namespace Write Protected:             No
00:07:09.664  Number of LBA Formats:                 8
00:07:09.664  Current LBA Format:                    LBA Format #04
00:07:09.664  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.664  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.664  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.664  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.664  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.664  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.664  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.664  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.664  
00:07:09.664  NVM Specific Namespace Data
00:07:09.664  ===========================
00:07:09.664  Logical Block Storage Tag Mask:               0
00:07:09.664  Protection Information Capabilities:
00:07:09.664    16b Guard Protection Information Storage Tag Support:  No
00:07:09.664    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.664    Storage Tag Check Read Support:                        No
00:07:09.664  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.664  =====================================================
00:07:09.664  NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:09.664  =====================================================
00:07:09.664  Controller Capabilities/Features
00:07:09.664  ================================
00:07:09.664  Vendor ID:                             1b36
00:07:09.664  Subsystem Vendor ID:                   1af4
00:07:09.664  Serial Number:                         12343
00:07:09.664  Model Number:                          QEMU NVMe Ctrl
00:07:09.664  Firmware Version:                      8.0.0
00:07:09.664  Recommended Arb Burst:                 6
00:07:09.664  IEEE OUI Identifier:                   00 54 52
00:07:09.664  Multi-path I/O
00:07:09.664    May have multiple subsystem ports:   No
00:07:09.664    May have multiple controllers:       Yes
00:07:09.664    Associated with SR-IOV VF:           No
00:07:09.664  Max Data Transfer Size:                524288
00:07:09.664  Max Number of Namespaces:              256
00:07:09.664  Max Number of I/O Queues:              64
00:07:09.664  NVMe Specification Version (VS):       1.4
00:07:09.664  NVMe Specification Version (Identify): 1.4
00:07:09.664  Maximum Queue Entries:                 2048
00:07:09.664  Contiguous Queues Required:            Yes
00:07:09.664  Arbitration Mechanisms Supported
00:07:09.664    Weighted Round Robin:                Not Supported
00:07:09.664    Vendor Specific:                     Not Supported
00:07:09.664  Reset Timeout:                         7500 ms
00:07:09.664  Doorbell Stride:                       4 bytes
00:07:09.664  NVM Subsystem Reset:                   Not Supported
00:07:09.664  Command Sets Supported
00:07:09.664    NVM Command Set:                     Supported
00:07:09.664  Boot Partition:                        Not Supported
00:07:09.664  Memory Page Size Minimum:              4096 bytes
00:07:09.664  Memory Page Size Maximum:              65536 bytes
00:07:09.664  Persistent Memory Region:              Not Supported
00:07:09.664  Optional Asynchronous Events Supported
00:07:09.664    Namespace Attribute Notices:         Supported
00:07:09.664    Firmware Activation Notices:         Not Supported
00:07:09.664    ANA Change Notices:                  Not Supported
00:07:09.664    PLE Aggregate Log Change Notices:    Not Supported
00:07:09.664    LBA Status Info Alert Notices:       Not Supported
00:07:09.664    EGE Aggregate Log Change Notices:    Not Supported
00:07:09.664    Normal NVM Subsystem Shutdown event: Not Supported
00:07:09.664    Zone Descriptor Change Notices:      Not Supported
00:07:09.664    Discovery Log Change Notices:        Not Supported
00:07:09.664  Controller Attributes
00:07:09.664    128-bit Host Identifier:             Not Supported
00:07:09.664    Non-Operational Permissive Mode:     Not Supported
00:07:09.664    NVM Sets:                            Not Supported
00:07:09.664    Read Recovery Levels:                Not Supported
00:07:09.664    Endurance Groups:                    Supported
00:07:09.664    Predictable Latency Mode:            Not Supported
00:07:09.664    Traffic Based Keep Alive:            Not Supported
00:07:09.664    Namespace Granularity:               Not Supported
00:07:09.664    SQ Associations:                     Not Supported
00:07:09.664    UUID List:                           Not Supported
00:07:09.664    Multi-Domain Subsystem:              Not Supported
00:07:09.664    Fixed Capacity Management:           Not Supported
00:07:09.664    Variable Capacity Management:        Not Supported
00:07:09.664    Delete Endurance Group:              Not Supported
00:07:09.664    Delete NVM Set:                      Not Supported
00:07:09.664    Extended LBA Formats Supported:      Supported
00:07:09.664    Flexible Data Placement Supported:   Supported
00:07:09.664  
00:07:09.664  Controller Memory Buffer Support
00:07:09.664  ================================
00:07:09.664  Supported:                             No
00:07:09.664  
00:07:09.664  Persistent Memory Region Support
00:07:09.664  ================================
00:07:09.664  Supported:                             No
00:07:09.664  
00:07:09.664  Admin Command Set Attributes
00:07:09.664  ============================
00:07:09.664  Security Send/Receive:                 Not Supported
00:07:09.664  Format NVM:                            Supported
00:07:09.664  Firmware Activate/Download:            Not Supported
00:07:09.664  Namespace Management:                  Supported
00:07:09.664  Device Self-Test:                      Not Supported
00:07:09.664  Directives:                            Supported
00:07:09.664  NVMe-MI:                               Not Supported
00:07:09.664  Virtualization Management:             Not Supported
00:07:09.664  Doorbell Buffer Config:                Supported
00:07:09.664  Get LBA Status Capability:             Not Supported
00:07:09.664  Command & Feature Lockdown Capability: Not Supported
00:07:09.664  Abort Command Limit:                   4
00:07:09.664  Async Event Request Limit:             4
00:07:09.664  Number of Firmware Slots:              N/A
00:07:09.664  Firmware Slot 1 Read-Only:             N/A
00:07:09.664  Firmware Activation Without Reset:     N/A
00:07:09.664  Multiple Update Detection Support:     N/A
00:07:09.664  Firmware Update Granularity:           No Information Provided
00:07:09.664  Per-Namespace SMART Log:               Yes
00:07:09.664  Asymmetric Namespace Access Log Page:  Not Supported
00:07:09.664  Subsystem NQN:                         nqn.2019-08.org.qemu:fdp-subsys3
00:07:09.664  Command Effects Log Page:              Supported
00:07:09.664  Get Log Page Extended Data:            Supported
00:07:09.664  Telemetry Log Pages:                   Not Supported
00:07:09.664  Persistent Event Log Pages:            Not Supported
00:07:09.664  Supported Log Pages Log Page:          May Support
00:07:09.664  Commands Supported & Effects Log Page: Not Supported
00:07:09.664  Feature Identifiers & Effects Log Page: May Support
00:07:09.664  NVMe-MI Commands & Effects Log Page:   May Support
00:07:09.664  Data Area 4 for Telemetry Log:         Not Supported
00:07:09.664  Error Log Page Entries Supported:      1
00:07:09.664  Keep Alive:                            Not Supported
00:07:09.664  
00:07:09.664  NVM Command Set Attributes
00:07:09.664  ==========================
00:07:09.664  Submission Queue Entry Size
00:07:09.664    Max:                       64
00:07:09.664    Min:                       64
00:07:09.664  Completion Queue Entry Size
00:07:09.664    Max:                       16
00:07:09.664    Min:                       16
00:07:09.664  Number of Namespaces:        256
00:07:09.664  Compare Command:             Supported
00:07:09.664  Write Uncorrectable Command: Not Supported
00:07:09.664  Dataset Management Command:  Supported
00:07:09.664  Write Zeroes Command:        Supported
00:07:09.664  Set Features Save Field:     Supported
00:07:09.664  Reservations:                Not Supported
00:07:09.664  Timestamp:                   Supported
00:07:09.664  Copy:                        Supported
00:07:09.664  Volatile Write Cache:        Present
00:07:09.664  Atomic Write Unit (Normal):  1
00:07:09.664  Atomic Write Unit (PFail):   1
00:07:09.664  Atomic Compare & Write Unit: 1
00:07:09.664  Fused Compare & Write:       Not Supported
00:07:09.664  Scatter-Gather List
00:07:09.664    SGL Command Set:           Supported
00:07:09.664    SGL Keyed:                 Not Supported
00:07:09.664    SGL Bit Bucket Descriptor: Not Supported
00:07:09.665    SGL Metadata Pointer:      Not Supported
00:07:09.665    Oversized SGL:             Not Supported
00:07:09.665    SGL Metadata Address:      Not Supported
00:07:09.665    SGL Offset:                Not Supported
00:07:09.665    Transport SGL Data Block:  Not Supported
00:07:09.665  Replay Protected Memory Block:  Not Supported
00:07:09.665  
00:07:09.665  Firmware Slot Information
00:07:09.665  =========================
00:07:09.665  Active slot:                 1
00:07:09.665  Slot 1 Firmware Revision:    1.0
00:07:09.665  
00:07:09.665  
00:07:09.665  Commands Supported and Effects
00:07:09.665  ==============================
00:07:09.665  Admin Commands
00:07:09.665  --------------
00:07:09.665     Delete I/O Submission Queue (00h): Supported 
00:07:09.665     Create I/O Submission Queue (01h): Supported 
00:07:09.665                    Get Log Page (02h): Supported 
00:07:09.665     Delete I/O Completion Queue (04h): Supported 
00:07:09.665     Create I/O Completion Queue (05h): Supported 
00:07:09.665                        Identify (06h): Supported 
00:07:09.665                           Abort (08h): Supported 
00:07:09.665                    Set Features (09h): Supported 
00:07:09.665                    Get Features (0Ah): Supported 
00:07:09.665      Asynchronous Event Request (0Ch): Supported 
00:07:09.665            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:09.665                  Directive Send (19h): Supported 
00:07:09.665               Directive Receive (1Ah): Supported 
00:07:09.665       Virtualization Management (1Ch): Supported 
00:07:09.665          Doorbell Buffer Config (7Ch): Supported 
00:07:09.665                      Format NVM (80h): Supported LBA-Change 
00:07:09.665  I/O Commands
00:07:09.665  ------------
00:07:09.665                           Flush (00h): Supported LBA-Change 
00:07:09.665                           Write (01h): Supported LBA-Change 
00:07:09.665                            Read (02h): Supported 
00:07:09.665                         Compare (05h): Supported 
00:07:09.665                    Write Zeroes (08h): Supported LBA-Change 
00:07:09.665              Dataset Management (09h): Supported LBA-Change 
00:07:09.665                         Unknown (0Ch): Supported 
00:07:09.665                         Unknown (12h): Supported 
00:07:09.665                            Copy (19h): Supported LBA-Change 
00:07:09.665                         Unknown (1Dh): Supported LBA-Change 
00:07:09.665  
00:07:09.665  Error Log
00:07:09.665  =========
00:07:09.665  
00:07:09.665  Arbitration
00:07:09.665  ===========
00:07:09.665  Arbitration Burst:           no limit
00:07:09.665  
00:07:09.665  Power Management
00:07:09.665  ================
00:07:09.665  Number of Power States:          1
00:07:09.665  Current Power State:             Power State #0
00:07:09.665  Power State #0:
00:07:09.665    Max Power:                     25.00 W
00:07:09.665    Non-Operational State:         Operational
00:07:09.665    Entry Latency:                 16 microseconds
00:07:09.665    Exit Latency:                  4 microseconds
00:07:09.665    Relative Read Throughput:      0
00:07:09.665    Relative Read Latency:         0
00:07:09.665    Relative Write Throughput:     0
00:07:09.665    Relative Write Latency:        0
00:07:09.665    Idle Power:                     Not Reported
00:07:09.665    Active Power:                   Not Reported
00:07:09.665  Non-Operational Permissive Mode: Not Supported
00:07:09.665  
00:07:09.665  Health Information
00:07:09.665  ==================
00:07:09.665  Critical Warnings:
00:07:09.665    Available Spare Space:     OK
00:07:09.665    Temperature:               OK
00:07:09.665    Device Reliability:        OK
00:07:09.665    Read Only:                 No
00:07:09.665    Volatile Memory Backup:    OK
00:07:09.665  Current Temperature:         323 Kelvin (50 Celsius)
00:07:09.665  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:09.665  Available Spare:             0%
00:07:09.665  Available Spare Threshold:   0%
00:07:09.665  Life Percentage Used:        0%
00:07:09.665  Data Units Read:             774
00:07:09.665  Data Units Written:          703
00:07:09.665  Host Read Commands:          35976
00:07:09.665  Host Write Commands:         35399
00:07:09.665  Controller Busy Time:        0 minutes
00:07:09.665  Power Cycles:                0
00:07:09.665  Power On Hours:              0 hours
00:07:09.665  Unsafe Shutdowns:            0
00:07:09.665  Unrecoverable Media Errors:  0
00:07:09.665  Lifetime Error Log Entries:  0
00:07:09.665  Warning Temperature Time:    0 minutes
00:07:09.665  Critical Temperature Time:   0 minutes
00:07:09.665  
00:07:09.665  Number of Queues
00:07:09.665  ================
00:07:09.665  Number of I/O Submission Queues:      64
00:07:09.665  Number of I/O Completion Queues:      64
00:07:09.665  
00:07:09.665  ZNS Specific Controller Data
00:07:09.665  ============================
00:07:09.665  Zone Append Size Limit:      0
00:07:09.665  
00:07:09.665  
00:07:09.665  Active Namespaces
00:07:09.665  =================
00:07:09.665  Namespace ID:1
00:07:09.665  Error Recovery Timeout:                Unlimited
00:07:09.665  Command Set Identifier:                NVM (00h)
00:07:09.665  Deallocate:                            Supported
00:07:09.665  Deallocated/Unwritten Error:           Supported
00:07:09.665  Deallocated Read Value:                All 0x00
00:07:09.665  Deallocate in Write Zeroes:            Not Supported
00:07:09.665  Deallocated Guard Field:               0xFFFF
00:07:09.665  Flush:                                 Supported
00:07:09.665  Reservation:                           Not Supported
00:07:09.665  Namespace Sharing Capabilities:        Multiple Controllers
00:07:09.665  Size (in LBAs):                        262144 (1GiB)
00:07:09.665  Capacity (in LBAs):                    262144 (1GiB)
00:07:09.665  Utilization (in LBAs):                 262144 (1GiB)
00:07:09.665  Thin Provisioning:                     Not Supported
00:07:09.665  Per-NS Atomic Units:                   No
00:07:09.665  Maximum Single Source Range Length:    128
00:07:09.665  Maximum Copy Length:                   128
00:07:09.665  Maximum Source Range Count:            128
00:07:09.665  NGUID/EUI64 Never Reused:              No
00:07:09.665  Namespace Write Protected:             No
00:07:09.665  Endurance group ID:                    1
00:07:09.665  Number of LBA Formats:                 8
00:07:09.665  Current LBA Format:                    LBA Format #04
00:07:09.665  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.665  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.665  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.665  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.665  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.665  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.665  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.665  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.665  
00:07:09.665  Get Feature FDP:
00:07:09.665  ================
00:07:09.665    Enabled:                 Yes
00:07:09.665    FDP configuration index: 0
00:07:09.665  
00:07:09.665  FDP configurations log page
00:07:09.665  ===========================
00:07:09.665  Number of FDP configurations:         1
00:07:09.665  Version:                              0
00:07:09.665  Size:                                 112
00:07:09.665  FDP Configuration Descriptor:         0
00:07:09.665    Descriptor Size:                    96
00:07:09.665    Reclaim Group Identifier format:    2
00:07:09.665    FDP Volatile Write Cache:           Not Present
00:07:09.665    FDP Configuration:                  Valid
00:07:09.665    Vendor Specific Size:               0
00:07:09.665    Number of Reclaim Groups:           2
00:07:09.665    Number of Reclaim Unit Handles:     8
00:07:09.665    Max Placement Identifiers:          128
00:07:09.665    Number of Namespaces Supported:     256
00:07:09.665    Reclaim Unit Nominal Size:          6000000 bytes
00:07:09.665    Estimated Reclaim Unit Time Limit:  Not Reported
00:07:09.665      RUH Desc #000:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #001:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #002:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #003:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #004:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #005:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #006:          RUH Type: Initially Isolated
00:07:09.665      RUH Desc #007:          RUH Type: Initially Isolated
00:07:09.665  
00:07:09.665  FDP reclaim unit handle usage log page
00:07:09.665  ======================================
00:07:09.665  Number of Reclaim Unit Handles:       8
00:07:09.665    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:07:09.665    RUH Usage Desc #001:   RUH Attributes: Unused
00:07:09.665    RUH Usage Desc #002:   RUH Attributes: Unused
00:07:09.665    RUH Usage Desc #003:   RUH Attributes: Unused
00:07:09.665    RUH Usage Desc #004:   RUH Attributes: Unused
00:07:09.666    RUH Usage Desc #005:   RUH Attributes: Unused
00:07:09.666    RUH Usage Desc #006:   RUH Attributes: Unused
00:07:09.666    RUH Usage Desc #007:   RUH Attributes: Unused
00:07:09.666  
00:07:09.666  FDP statistics log page
00:07:09.666  =======================
00:07:09.666  Host bytes with metadata written:  439463936
00:07:09.666  Media bytes with metadata written: 439500800
00:07:09.666  [2024-12-09 16:56:32.519735] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64112 terminated unexpected
00:07:09.666  Media bytes erased:                0
00:07:09.666  
00:07:09.666  FDP events log page
00:07:09.666  ===================
00:07:09.666  Number of FDP events:              0
00:07:09.666  
00:07:09.666  NVM Specific Namespace Data
00:07:09.666  ===========================
00:07:09.666  Logical Block Storage Tag Mask:               0
00:07:09.666  Protection Information Capabilities:
00:07:09.666    16b Guard Protection Information Storage Tag Support:  No
00:07:09.666    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.666    Storage Tag Check Read Support:                        No
00:07:09.666  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.666  =====================================================
00:07:09.666  NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:09.666  =====================================================
00:07:09.666  Controller Capabilities/Features
00:07:09.666  ================================
00:07:09.666  Vendor ID:                             1b36
00:07:09.666  Subsystem Vendor ID:                   1af4
00:07:09.666  Serial Number:                         12342
00:07:09.666  Model Number:                          QEMU NVMe Ctrl
00:07:09.666  Firmware Version:                      8.0.0
00:07:09.666  Recommended Arb Burst:                 6
00:07:09.666  IEEE OUI Identifier:                   00 54 52
00:07:09.666  Multi-path I/O
00:07:09.666    May have multiple subsystem ports:   No
00:07:09.666    May have multiple controllers:       No
00:07:09.666    Associated with SR-IOV VF:           No
00:07:09.666  Max Data Transfer Size:                524288
00:07:09.666  Max Number of Namespaces:              256
00:07:09.666  Max Number of I/O Queues:              64
00:07:09.666  NVMe Specification Version (VS):       1.4
00:07:09.666  NVMe Specification Version (Identify): 1.4
00:07:09.666  Maximum Queue Entries:                 2048
00:07:09.666  Contiguous Queues Required:            Yes
00:07:09.666  Arbitration Mechanisms Supported
00:07:09.666    Weighted Round Robin:                Not Supported
00:07:09.666    Vendor Specific:                     Not Supported
00:07:09.666  Reset Timeout:                         7500 ms
00:07:09.666  Doorbell Stride:                       4 bytes
00:07:09.666  NVM Subsystem Reset:                   Not Supported
00:07:09.666  Command Sets Supported
00:07:09.666    NVM Command Set:                     Supported
00:07:09.666  Boot Partition:                        Not Supported
00:07:09.666  Memory Page Size Minimum:              4096 bytes
00:07:09.666  Memory Page Size Maximum:              65536 bytes
00:07:09.666  Persistent Memory Region:              Not Supported
00:07:09.666  Optional Asynchronous Events Supported
00:07:09.666    Namespace Attribute Notices:         Supported
00:07:09.666    Firmware Activation Notices:         Not Supported
00:07:09.666    ANA Change Notices:                  Not Supported
00:07:09.666    PLE Aggregate Log Change Notices:    Not Supported
00:07:09.666    LBA Status Info Alert Notices:       Not Supported
00:07:09.666    EGE Aggregate Log Change Notices:    Not Supported
00:07:09.666    Normal NVM Subsystem Shutdown event: Not Supported
00:07:09.666    Zone Descriptor Change Notices:      Not Supported
00:07:09.666    Discovery Log Change Notices:        Not Supported
00:07:09.666  Controller Attributes
00:07:09.666    128-bit Host Identifier:             Not Supported
00:07:09.666    Non-Operational Permissive Mode:     Not Supported
00:07:09.666    NVM Sets:                            Not Supported
00:07:09.666    Read Recovery Levels:                Not Supported
00:07:09.666    Endurance Groups:                    Not Supported
00:07:09.666    Predictable Latency Mode:            Not Supported
00:07:09.666    Traffic Based Keep Alive:            Not Supported
00:07:09.666    Namespace Granularity:               Not Supported
00:07:09.666    SQ Associations:                     Not Supported
00:07:09.666    UUID List:                           Not Supported
00:07:09.666    Multi-Domain Subsystem:              Not Supported
00:07:09.666    Fixed Capacity Management:           Not Supported
00:07:09.666    Variable Capacity Management:        Not Supported
00:07:09.666    Delete Endurance Group:              Not Supported
00:07:09.666    Delete NVM Set:                      Not Supported
00:07:09.666    Extended LBA Formats Supported:      Supported
00:07:09.666    Flexible Data Placement Supported:   Not Supported
00:07:09.666  
00:07:09.666  Controller Memory Buffer Support
00:07:09.666  ================================
00:07:09.666  Supported:                             No
00:07:09.666  
00:07:09.666  Persistent Memory Region Support
00:07:09.666  ================================
00:07:09.666  Supported:                             No
00:07:09.666  
00:07:09.666  Admin Command Set Attributes
00:07:09.666  ============================
00:07:09.666  Security Send/Receive:                 Not Supported
00:07:09.666  Format NVM:                            Supported
00:07:09.666  Firmware Activate/Download:            Not Supported
00:07:09.666  Namespace Management:                  Supported
00:07:09.666  Device Self-Test:                      Not Supported
00:07:09.666  Directives:                            Supported
00:07:09.666  NVMe-MI:                               Not Supported
00:07:09.666  Virtualization Management:             Not Supported
00:07:09.666  Doorbell Buffer Config:                Supported
00:07:09.666  Get LBA Status Capability:             Not Supported
00:07:09.666  Command & Feature Lockdown Capability: Not Supported
00:07:09.666  Abort Command Limit:                   4
00:07:09.666  Async Event Request Limit:             4
00:07:09.666  Number of Firmware Slots:              N/A
00:07:09.666  Firmware Slot 1 Read-Only:             N/A
00:07:09.666  Firmware Activation Without Reset:     N/A
00:07:09.666  Multiple Update Detection Support:     N/A
00:07:09.666  Firmware Update Granularity:           No Information Provided
00:07:09.666  Per-Namespace SMART Log:               Yes
00:07:09.666  Asymmetric Namespace Access Log Page:  Not Supported
00:07:09.666  Subsystem NQN:                         nqn.2019-08.org.qemu:12342
00:07:09.666  Command Effects Log Page:              Supported
00:07:09.666  Get Log Page Extended Data:            Supported
00:07:09.666  Telemetry Log Pages:                   Not Supported
00:07:09.666  Persistent Event Log Pages:            Not Supported
00:07:09.666  Supported Log Pages Log Page:          May Support
00:07:09.666  Commands Supported & Effects Log Page: Not Supported
00:07:09.666  Feature Identifiers & Effects Log Page: May Support
00:07:09.666  NVMe-MI Commands & Effects Log Page:   May Support
00:07:09.667  Data Area 4 for Telemetry Log:         Not Supported
00:07:09.667  Error Log Page Entries Supported:      1
00:07:09.667  Keep Alive:                            Not Supported
00:07:09.667  
00:07:09.667  NVM Command Set Attributes
00:07:09.667  ==========================
00:07:09.667  Submission Queue Entry Size
00:07:09.667    Max:                       64
00:07:09.667    Min:                       64
00:07:09.667  Completion Queue Entry Size
00:07:09.667    Max:                       16
00:07:09.667    Min:                       16
00:07:09.667  Number of Namespaces:        256
00:07:09.667  Compare Command:             Supported
00:07:09.667  Write Uncorrectable Command: Not Supported
00:07:09.667  Dataset Management Command:  Supported
00:07:09.667  Write Zeroes Command:        Supported
00:07:09.667  Set Features Save Field:     Supported
00:07:09.667  Reservations:                Not Supported
00:07:09.667  Timestamp:                   Supported
00:07:09.667  Copy:                        Supported
00:07:09.667  Volatile Write Cache:        Present
00:07:09.667  Atomic Write Unit (Normal):  1
00:07:09.667  Atomic Write Unit (PFail):   1
00:07:09.667  Atomic Compare & Write Unit: 1
00:07:09.667  Fused Compare & Write:       Not Supported
00:07:09.667  Scatter-Gather List
00:07:09.667    SGL Command Set:           Supported
00:07:09.667    SGL Keyed:                 Not Supported
00:07:09.667    SGL Bit Bucket Descriptor: Not Supported
00:07:09.667    SGL Metadata Pointer:      Not Supported
00:07:09.667    Oversized SGL:             Not Supported
00:07:09.667    SGL Metadata Address:      Not Supported
00:07:09.667    SGL Offset:                Not Supported
00:07:09.667    Transport SGL Data Block:  Not Supported
00:07:09.667  Replay Protected Memory Block:  Not Supported
00:07:09.667  
00:07:09.667  Firmware Slot Information
00:07:09.667  =========================
00:07:09.667  Active slot:                 1
00:07:09.667  Slot 1 Firmware Revision:    1.0
00:07:09.667  
00:07:09.667  
00:07:09.667  Commands Supported and Effects
00:07:09.667  ==============================
00:07:09.667  Admin Commands
00:07:09.667  --------------
00:07:09.667     Delete I/O Submission Queue (00h): Supported 
00:07:09.667     Create I/O Submission Queue (01h): Supported 
00:07:09.667                    Get Log Page (02h): Supported 
00:07:09.667     Delete I/O Completion Queue (04h): Supported 
00:07:09.667     Create I/O Completion Queue (05h): Supported 
00:07:09.667                        Identify (06h): Supported 
00:07:09.667                           Abort (08h): Supported 
00:07:09.667                    Set Features (09h): Supported 
00:07:09.667                    Get Features (0Ah): Supported 
00:07:09.667      Asynchronous Event Request (0Ch): Supported 
00:07:09.667            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:09.667                  Directive Send (19h): Supported 
00:07:09.667               Directive Receive (1Ah): Supported 
00:07:09.667       Virtualization Management (1Ch): Supported 
00:07:09.667          Doorbell Buffer Config (7Ch): Supported 
00:07:09.667                      Format NVM (80h): Supported LBA-Change 
00:07:09.667  I/O Commands
00:07:09.667  ------------
00:07:09.667                           Flush (00h): Supported LBA-Change 
00:07:09.667                           Write (01h): Supported LBA-Change 
00:07:09.667                            Read (02h): Supported 
00:07:09.667                         Compare (05h): Supported 
00:07:09.667                    Write Zeroes (08h): Supported LBA-Change 
00:07:09.667              Dataset Management (09h): Supported LBA-Change 
00:07:09.667                         Unknown (0Ch): Supported 
00:07:09.667                         Unknown (12h): Supported 
00:07:09.667                            Copy (19h): Supported LBA-Change 
00:07:09.667                         Unknown (1Dh): Supported LBA-Change 
00:07:09.667  
00:07:09.667  Error Log
00:07:09.667  =========
00:07:09.667  
00:07:09.667  Arbitration
00:07:09.667  ===========
00:07:09.667  Arbitration Burst:           no limit
00:07:09.667  
00:07:09.667  Power Management
00:07:09.667  ================
00:07:09.667  Number of Power States:          1
00:07:09.667  Current Power State:             Power State #0
00:07:09.667  Power State #0:
00:07:09.667    Max Power:                     25.00 W
00:07:09.667    Non-Operational State:         Operational
00:07:09.667    Entry Latency:                 16 microseconds
00:07:09.667    Exit Latency:                  4 microseconds
00:07:09.667    Relative Read Throughput:      0
00:07:09.667    Relative Read Latency:         0
00:07:09.667    Relative Write Throughput:     0
00:07:09.667    Relative Write Latency:        0
00:07:09.667    Idle Power:                     Not Reported
00:07:09.667    Active Power:                   Not Reported
00:07:09.667  Non-Operational Permissive Mode: Not Supported
00:07:09.667  
00:07:09.667  Health Information
00:07:09.667  ==================
00:07:09.667  Critical Warnings:
00:07:09.667    Available Spare Space:     OK
00:07:09.667    Temperature:               OK
00:07:09.667    Device Reliability:        OK
00:07:09.667    Read Only:                 No
00:07:09.667    Volatile Memory Backup:    OK
00:07:09.667  Current Temperature:         323 Kelvin (50 Celsius)
00:07:09.667  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:09.667  Available Spare:             0%
00:07:09.667  Available Spare Threshold:   0%
00:07:09.667  Life Percentage Used:        0%
00:07:09.667  Data Units Read:             2048
00:07:09.667  Data Units Written:          1836
00:07:09.667  Host Read Commands:          105321
00:07:09.667  Host Write Commands:         103591
00:07:09.667  Controller Busy Time:        0 minutes
00:07:09.667  Power Cycles:                0
00:07:09.667  Power On Hours:              0 hours
00:07:09.667  Unsafe Shutdowns:            0
00:07:09.667  Unrecoverable Media Errors:  0
00:07:09.667  Lifetime Error Log Entries:  0
00:07:09.667  Warning Temperature Time:    0 minutes
00:07:09.667  Critical Temperature Time:   0 minutes
00:07:09.667  
00:07:09.667  Number of Queues
00:07:09.667  ================
00:07:09.667  Number of I/O Submission Queues:      64
00:07:09.667  Number of I/O Completion Queues:      64
00:07:09.667  
00:07:09.667  ZNS Specific Controller Data
00:07:09.667  ============================
00:07:09.667  Zone Append Size Limit:      0
00:07:09.667  
00:07:09.667  
00:07:09.667  Active Namespaces
00:07:09.667  =================
00:07:09.667  Namespace ID:1
00:07:09.667  Error Recovery Timeout:                Unlimited
00:07:09.667  Command Set Identifier:                NVM (00h)
00:07:09.667  Deallocate:                            Supported
00:07:09.667  Deallocated/Unwritten Error:           Supported
00:07:09.667  Deallocated Read Value:                All 0x00
00:07:09.667  Deallocate in Write Zeroes:            Not Supported
00:07:09.667  Deallocated Guard Field:               0xFFFF
00:07:09.667  Flush:                                 Supported
00:07:09.667  Reservation:                           Not Supported
00:07:09.667  Namespace Sharing Capabilities:        Private
00:07:09.667  Size (in LBAs):                        1048576 (4GiB)
00:07:09.667  Capacity (in LBAs):                    1048576 (4GiB)
00:07:09.667  Utilization (in LBAs):                 1048576 (4GiB)
00:07:09.667  Thin Provisioning:                     Not Supported
00:07:09.667  Per-NS Atomic Units:                   No
00:07:09.667  Maximum Single Source Range Length:    128
00:07:09.667  Maximum Copy Length:                   128
00:07:09.667  Maximum Source Range Count:            128
00:07:09.667  NGUID/EUI64 Never Reused:              No
00:07:09.667  Namespace Write Protected:             No
00:07:09.667  Number of LBA Formats:                 8
00:07:09.667  Current LBA Format:                    LBA Format #04
00:07:09.667  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.667  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.667  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.667  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.667  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.667  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.667  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.667  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.667  
00:07:09.667  NVM Specific Namespace Data
00:07:09.667  ===========================
00:07:09.667  Logical Block Storage Tag Mask:               0
00:07:09.667  Protection Information Capabilities:
00:07:09.667    16b Guard Protection Information Storage Tag Support:  No
00:07:09.667    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.667    Storage Tag Check Read Support:                        No
00:07:09.667  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.667  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.667  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Namespace ID:2
00:07:09.668  Error Recovery Timeout:                Unlimited
00:07:09.668  Command Set Identifier:                NVM (00h)
00:07:09.668  Deallocate:                            Supported
00:07:09.668  Deallocated/Unwritten Error:           Supported
00:07:09.668  Deallocated Read Value:                All 0x00
00:07:09.668  Deallocate in Write Zeroes:            Not Supported
00:07:09.668  Deallocated Guard Field:               0xFFFF
00:07:09.668  Flush:                                 Supported
00:07:09.668  Reservation:                           Not Supported
00:07:09.668  Namespace Sharing Capabilities:        Private
00:07:09.668  Size (in LBAs):                        1048576 (4GiB)
00:07:09.668  Capacity (in LBAs):                    1048576 (4GiB)
00:07:09.668  Utilization (in LBAs):                 1048576 (4GiB)
00:07:09.668  Thin Provisioning:                     Not Supported
00:07:09.668  Per-NS Atomic Units:                   No
00:07:09.668  Maximum Single Source Range Length:    128
00:07:09.668  Maximum Copy Length:                   128
00:07:09.668  Maximum Source Range Count:            128
00:07:09.668  NGUID/EUI64 Never Reused:              No
00:07:09.668  Namespace Write Protected:             No
00:07:09.668  Number of LBA Formats:                 8
00:07:09.668  Current LBA Format:                    LBA Format #04
00:07:09.668  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.668  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.668  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.668  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.668  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.668  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.668  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.668  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.668  
00:07:09.668  NVM Specific Namespace Data
00:07:09.668  ===========================
00:07:09.668  Logical Block Storage Tag Mask:               0
00:07:09.668  Protection Information Capabilities:
00:07:09.668    16b Guard Protection Information Storage Tag Support:  No
00:07:09.668    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.668    Storage Tag Check Read Support:                        No
00:07:09.668  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Namespace ID:3
00:07:09.668  Error Recovery Timeout:                Unlimited
00:07:09.668  Command Set Identifier:                NVM (00h)
00:07:09.668  Deallocate:                            Supported
00:07:09.668  Deallocated/Unwritten Error:           Supported
00:07:09.668  Deallocated Read Value:                All 0x00
00:07:09.668  Deallocate in Write Zeroes:            Not Supported
00:07:09.668  Deallocated Guard Field:               0xFFFF
00:07:09.668  Flush:                                 Supported
00:07:09.668  Reservation:                           Not Supported
00:07:09.668  Namespace Sharing Capabilities:        Private
00:07:09.668  Size (in LBAs):                        1048576 (4GiB)
00:07:09.668  Capacity (in LBAs):                    1048576 (4GiB)
00:07:09.668  Utilization (in LBAs):                 1048576 (4GiB)
00:07:09.668  Thin Provisioning:                     Not Supported
00:07:09.668  Per-NS Atomic Units:                   No
00:07:09.668  Maximum Single Source Range Length:    128
00:07:09.668  Maximum Copy Length:                   128
00:07:09.668  Maximum Source Range Count:            128
00:07:09.668  NGUID/EUI64 Never Reused:              No
00:07:09.668  Namespace Write Protected:             No
00:07:09.668  Number of LBA Formats:                 8
00:07:09.668  Current LBA Format:                    LBA Format #04
00:07:09.668  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.668  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.668  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.668  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.668  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.668  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.668  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.668  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.668  
00:07:09.668  NVM Specific Namespace Data
00:07:09.668  ===========================
00:07:09.668  Logical Block Storage Tag Mask:               0
00:07:09.668  Protection Information Capabilities:
00:07:09.668    16b Guard Protection Information Storage Tag Support:  No
00:07:09.668    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.668    Storage Tag Check Read Support:                        No
00:07:09.668  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.668   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:09.668   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:07:09.927  =====================================================
00:07:09.927  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:09.927  =====================================================
00:07:09.927  Controller Capabilities/Features
00:07:09.927  ================================
00:07:09.927  Vendor ID:                             1b36
00:07:09.927  Subsystem Vendor ID:                   1af4
00:07:09.928  Serial Number:                         12340
00:07:09.928  Model Number:                          QEMU NVMe Ctrl
00:07:09.928  Firmware Version:                      8.0.0
00:07:09.928  Recommended Arb Burst:                 6
00:07:09.928  IEEE OUI Identifier:                   00 54 52
00:07:09.928  Multi-path I/O
00:07:09.928    May have multiple subsystem ports:   No
00:07:09.928    May have multiple controllers:       No
00:07:09.928    Associated with SR-IOV VF:           No
00:07:09.928  Max Data Transfer Size:                524288
00:07:09.928  Max Number of Namespaces:              256
00:07:09.928  Max Number of I/O Queues:              64
00:07:09.928  NVMe Specification Version (VS):       1.4
00:07:09.928  NVMe Specification Version (Identify): 1.4
00:07:09.928  Maximum Queue Entries:                 2048
00:07:09.928  Contiguous Queues Required:            Yes
00:07:09.928  Arbitration Mechanisms Supported
00:07:09.928    Weighted Round Robin:                Not Supported
00:07:09.928    Vendor Specific:                     Not Supported
00:07:09.928  Reset Timeout:                         7500 ms
00:07:09.928  Doorbell Stride:                       4 bytes
00:07:09.928  NVM Subsystem Reset:                   Not Supported
00:07:09.928  Command Sets Supported
00:07:09.928    NVM Command Set:                     Supported
00:07:09.928  Boot Partition:                        Not Supported
00:07:09.928  Memory Page Size Minimum:              4096 bytes
00:07:09.928  Memory Page Size Maximum:              65536 bytes
00:07:09.928  Persistent Memory Region:              Not Supported
00:07:09.928  Optional Asynchronous Events Supported
00:07:09.928    Namespace Attribute Notices:         Supported
00:07:09.928    Firmware Activation Notices:         Not Supported
00:07:09.928    ANA Change Notices:                  Not Supported
00:07:09.928    PLE Aggregate Log Change Notices:    Not Supported
00:07:09.928    LBA Status Info Alert Notices:       Not Supported
00:07:09.928    EGE Aggregate Log Change Notices:    Not Supported
00:07:09.928    Normal NVM Subsystem Shutdown event: Not Supported
00:07:09.928    Zone Descriptor Change Notices:      Not Supported
00:07:09.928    Discovery Log Change Notices:        Not Supported
00:07:09.928  Controller Attributes
00:07:09.928    128-bit Host Identifier:             Not Supported
00:07:09.928    Non-Operational Permissive Mode:     Not Supported
00:07:09.928    NVM Sets:                            Not Supported
00:07:09.928    Read Recovery Levels:                Not Supported
00:07:09.928    Endurance Groups:                    Not Supported
00:07:09.928    Predictable Latency Mode:            Not Supported
00:07:09.928    Traffic Based Keep Alive:            Not Supported
00:07:09.928    Namespace Granularity:               Not Supported
00:07:09.928    SQ Associations:                     Not Supported
00:07:09.928    UUID List:                           Not Supported
00:07:09.928    Multi-Domain Subsystem:              Not Supported
00:07:09.928    Fixed Capacity Management:           Not Supported
00:07:09.928    Variable Capacity Management:        Not Supported
00:07:09.928    Delete Endurance Group:              Not Supported
00:07:09.928    Delete NVM Set:                      Not Supported
00:07:09.928    Extended LBA Formats Supported:      Supported
00:07:09.928    Flexible Data Placement Supported:   Not Supported
00:07:09.928  
00:07:09.928  Controller Memory Buffer Support
00:07:09.928  ================================
00:07:09.928  Supported:                             No
00:07:09.928  
00:07:09.928  Persistent Memory Region Support
00:07:09.928  ================================
00:07:09.928  Supported:                             No
00:07:09.928  
00:07:09.928  Admin Command Set Attributes
00:07:09.928  ============================
00:07:09.928  Security Send/Receive:                 Not Supported
00:07:09.928  Format NVM:                            Supported
00:07:09.928  Firmware Activate/Download:            Not Supported
00:07:09.928  Namespace Management:                  Supported
00:07:09.928  Device Self-Test:                      Not Supported
00:07:09.928  Directives:                            Supported
00:07:09.928  NVMe-MI:                               Not Supported
00:07:09.928  Virtualization Management:             Not Supported
00:07:09.928  Doorbell Buffer Config:                Supported
00:07:09.928  Get LBA Status Capability:             Not Supported
00:07:09.928  Command & Feature Lockdown Capability: Not Supported
00:07:09.928  Abort Command Limit:                   4
00:07:09.928  Async Event Request Limit:             4
00:07:09.928  Number of Firmware Slots:              N/A
00:07:09.928  Firmware Slot 1 Read-Only:             N/A
00:07:09.928  Firmware Activation Without Reset:     N/A
00:07:09.928  Multiple Update Detection Support:     N/A
00:07:09.928  Firmware Update Granularity:           No Information Provided
00:07:09.928  Per-Namespace SMART Log:               Yes
00:07:09.928  Asymmetric Namespace Access Log Page:  Not Supported
00:07:09.928  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:07:09.928  Command Effects Log Page:              Supported
00:07:09.928  Get Log Page Extended Data:            Supported
00:07:09.928  Telemetry Log Pages:                   Not Supported
00:07:09.928  Persistent Event Log Pages:            Not Supported
00:07:09.928  Supported Log Pages Log Page:          May Support
00:07:09.928  Commands Supported & Effects Log Page: Not Supported
00:07:09.928  Feature Identifiers & Effects Log Page: May Support
00:07:09.928  NVMe-MI Commands & Effects Log Page:   May Support
00:07:09.928  Data Area 4 for Telemetry Log:         Not Supported
00:07:09.928  Error Log Page Entries Supported:      1
00:07:09.928  Keep Alive:                            Not Supported
00:07:09.928  
00:07:09.928  NVM Command Set Attributes
00:07:09.928  ==========================
00:07:09.928  Submission Queue Entry Size
00:07:09.928    Max:                       64
00:07:09.928    Min:                       64
00:07:09.928  Completion Queue Entry Size
00:07:09.928    Max:                       16
00:07:09.928    Min:                       16
00:07:09.928  Number of Namespaces:        256
00:07:09.928  Compare Command:             Supported
00:07:09.928  Write Uncorrectable Command: Not Supported
00:07:09.928  Dataset Management Command:  Supported
00:07:09.928  Write Zeroes Command:        Supported
00:07:09.928  Set Features Save Field:     Supported
00:07:09.928  Reservations:                Not Supported
00:07:09.928  Timestamp:                   Supported
00:07:09.928  Copy:                        Supported
00:07:09.928  Volatile Write Cache:        Present
00:07:09.928  Atomic Write Unit (Normal):  1
00:07:09.928  Atomic Write Unit (PFail):   1
00:07:09.928  Atomic Compare & Write Unit: 1
00:07:09.928  Fused Compare & Write:       Not Supported
00:07:09.928  Scatter-Gather List
00:07:09.928    SGL Command Set:           Supported
00:07:09.928    SGL Keyed:                 Not Supported
00:07:09.928    SGL Bit Bucket Descriptor: Not Supported
00:07:09.928    SGL Metadata Pointer:      Not Supported
00:07:09.928    Oversized SGL:             Not Supported
00:07:09.928    SGL Metadata Address:      Not Supported
00:07:09.928    SGL Offset:                Not Supported
00:07:09.928    Transport SGL Data Block:  Not Supported
00:07:09.928  Replay Protected Memory Block:  Not Supported
00:07:09.928  
00:07:09.928  Firmware Slot Information
00:07:09.928  =========================
00:07:09.928  Active slot:                 1
00:07:09.928  Slot 1 Firmware Revision:    1.0
00:07:09.928  
00:07:09.928  
00:07:09.928  Commands Supported and Effects
00:07:09.928  ==============================
00:07:09.928  Admin Commands
00:07:09.928  --------------
00:07:09.928     Delete I/O Submission Queue (00h): Supported 
00:07:09.928     Create I/O Submission Queue (01h): Supported 
00:07:09.928                    Get Log Page (02h): Supported 
00:07:09.928     Delete I/O Completion Queue (04h): Supported 
00:07:09.928     Create I/O Completion Queue (05h): Supported 
00:07:09.928                        Identify (06h): Supported 
00:07:09.928                           Abort (08h): Supported 
00:07:09.928                    Set Features (09h): Supported 
00:07:09.928                    Get Features (0Ah): Supported 
00:07:09.928      Asynchronous Event Request (0Ch): Supported 
00:07:09.928            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:09.928                  Directive Send (19h): Supported 
00:07:09.928               Directive Receive (1Ah): Supported 
00:07:09.928       Virtualization Management (1Ch): Supported 
00:07:09.928          Doorbell Buffer Config (7Ch): Supported 
00:07:09.928                      Format NVM (80h): Supported LBA-Change 
00:07:09.928  I/O Commands
00:07:09.928  ------------
00:07:09.928                           Flush (00h): Supported LBA-Change 
00:07:09.928                           Write (01h): Supported LBA-Change 
00:07:09.928                            Read (02h): Supported 
00:07:09.929                         Compare (05h): Supported 
00:07:09.929                    Write Zeroes (08h): Supported LBA-Change 
00:07:09.929              Dataset Management (09h): Supported LBA-Change 
00:07:09.929                         Unknown (0Ch): Supported 
00:07:09.929                         Unknown (12h): Supported 
00:07:09.929                            Copy (19h): Supported LBA-Change 
00:07:09.929                         Unknown (1Dh): Supported LBA-Change 
00:07:09.929  
00:07:09.929  Error Log
00:07:09.929  =========
00:07:09.929  
00:07:09.929  Arbitration
00:07:09.929  ===========
00:07:09.929  Arbitration Burst:           no limit
00:07:09.929  
00:07:09.929  Power Management
00:07:09.929  ================
00:07:09.929  Number of Power States:          1
00:07:09.929  Current Power State:             Power State #0
00:07:09.929  Power State #0:
00:07:09.929    Max Power:                     25.00 W
00:07:09.929    Non-Operational State:         Operational
00:07:09.929    Entry Latency:                 16 microseconds
00:07:09.929    Exit Latency:                  4 microseconds
00:07:09.929    Relative Read Throughput:      0
00:07:09.929    Relative Read Latency:         0
00:07:09.929    Relative Write Throughput:     0
00:07:09.929    Relative Write Latency:        0
00:07:09.929    Idle Power:                     Not Reported
00:07:09.929    Active Power:                   Not Reported
00:07:09.929  Non-Operational Permissive Mode: Not Supported
00:07:09.929  
00:07:09.929  Health Information
00:07:09.929  ==================
00:07:09.929  Critical Warnings:
00:07:09.929    Available Spare Space:     OK
00:07:09.929    Temperature:               OK
00:07:09.929    Device Reliability:        OK
00:07:09.929    Read Only:                 No
00:07:09.929    Volatile Memory Backup:    OK
00:07:09.929  Current Temperature:         323 Kelvin (50 Celsius)
00:07:09.929  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:09.929  Available Spare:             0%
00:07:09.929  Available Spare Threshold:   0%
00:07:09.929  Life Percentage Used:        0%
00:07:09.929  Data Units Read:             638
00:07:09.929  Data Units Written:          566
00:07:09.929  Host Read Commands:          34406
00:07:09.929  Host Write Commands:         34192
00:07:09.929  Controller Busy Time:        0 minutes
00:07:09.929  Power Cycles:                0
00:07:09.929  Power On Hours:              0 hours
00:07:09.929  Unsafe Shutdowns:            0
00:07:09.929  Unrecoverable Media Errors:  0
00:07:09.929  Lifetime Error Log Entries:  0
00:07:09.929  Warning Temperature Time:    0 minutes
00:07:09.929  Critical Temperature Time:   0 minutes
00:07:09.929  
00:07:09.929  Number of Queues
00:07:09.929  ================
00:07:09.929  Number of I/O Submission Queues:      64
00:07:09.929  Number of I/O Completion Queues:      64
00:07:09.929  
00:07:09.929  ZNS Specific Controller Data
00:07:09.929  ============================
00:07:09.929  Zone Append Size Limit:      0
00:07:09.929  
00:07:09.929  
00:07:09.929  Active Namespaces
00:07:09.929  =================
00:07:09.929  Namespace ID:1
00:07:09.929  Error Recovery Timeout:                Unlimited
00:07:09.929  Command Set Identifier:                NVM (00h)
00:07:09.929  Deallocate:                            Supported
00:07:09.929  Deallocated/Unwritten Error:           Supported
00:07:09.929  Deallocated Read Value:                All 0x00
00:07:09.929  Deallocate in Write Zeroes:            Not Supported
00:07:09.929  Deallocated Guard Field:               0xFFFF
00:07:09.929  Flush:                                 Supported
00:07:09.929  Reservation:                           Not Supported
00:07:09.929  Metadata Transferred as:               Separate Metadata Buffer
00:07:09.929  Namespace Sharing Capabilities:        Private
00:07:09.929  Size (in LBAs):                        1548666 (5GiB)
00:07:09.929  Capacity (in LBAs):                    1548666 (5GiB)
00:07:09.929  Utilization (in LBAs):                 1548666 (5GiB)
00:07:09.929  Thin Provisioning:                     Not Supported
00:07:09.929  Per-NS Atomic Units:                   No
00:07:09.929  Maximum Single Source Range Length:    128
00:07:09.929  Maximum Copy Length:                   128
00:07:09.929  Maximum Source Range Count:            128
00:07:09.929  NGUID/EUI64 Never Reused:              No
00:07:09.929  Namespace Write Protected:             No
00:07:09.929  Number of LBA Formats:                 8
00:07:09.929  Current LBA Format:                    LBA Format #07
00:07:09.929  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:09.929  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:09.929  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:09.929  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:09.929  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:09.929  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:09.929  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:09.929  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:09.929  
00:07:09.929  NVM Specific Namespace Data
00:07:09.929  ===========================
00:07:09.929  Logical Block Storage Tag Mask:               0
00:07:09.929  Protection Information Capabilities:
00:07:09.929    16b Guard Protection Information Storage Tag Support:  No
00:07:09.929    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:09.929    Storage Tag Check Read Support:                        No
00:07:09.929  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:09.929   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:09.929   16:56:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:07:10.189  =====================================================
00:07:10.189  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:10.189  =====================================================
00:07:10.189  Controller Capabilities/Features
00:07:10.189  ================================
00:07:10.189  Vendor ID:                             1b36
00:07:10.190  Subsystem Vendor ID:                   1af4
00:07:10.190  Serial Number:                         12341
00:07:10.190  Model Number:                          QEMU NVMe Ctrl
00:07:10.190  Firmware Version:                      8.0.0
00:07:10.190  Recommended Arb Burst:                 6
00:07:10.190  IEEE OUI Identifier:                   00 54 52
00:07:10.190  Multi-path I/O
00:07:10.190    May have multiple subsystem ports:   No
00:07:10.190    May have multiple controllers:       No
00:07:10.190    Associated with SR-IOV VF:           No
00:07:10.190  Max Data Transfer Size:                524288
00:07:10.190  Max Number of Namespaces:              256
00:07:10.190  Max Number of I/O Queues:              64
00:07:10.190  NVMe Specification Version (VS):       1.4
00:07:10.190  NVMe Specification Version (Identify): 1.4
00:07:10.190  Maximum Queue Entries:                 2048
00:07:10.190  Contiguous Queues Required:            Yes
00:07:10.190  Arbitration Mechanisms Supported
00:07:10.190    Weighted Round Robin:                Not Supported
00:07:10.190    Vendor Specific:                     Not Supported
00:07:10.190  Reset Timeout:                         7500 ms
00:07:10.190  Doorbell Stride:                       4 bytes
00:07:10.190  NVM Subsystem Reset:                   Not Supported
00:07:10.190  Command Sets Supported
00:07:10.190    NVM Command Set:                     Supported
00:07:10.190  Boot Partition:                        Not Supported
00:07:10.190  Memory Page Size Minimum:              4096 bytes
00:07:10.190  Memory Page Size Maximum:              65536 bytes
00:07:10.190  Persistent Memory Region:              Not Supported
00:07:10.190  Optional Asynchronous Events Supported
00:07:10.190    Namespace Attribute Notices:         Supported
00:07:10.190    Firmware Activation Notices:         Not Supported
00:07:10.190    ANA Change Notices:                  Not Supported
00:07:10.190    PLE Aggregate Log Change Notices:    Not Supported
00:07:10.190    LBA Status Info Alert Notices:       Not Supported
00:07:10.190    EGE Aggregate Log Change Notices:    Not Supported
00:07:10.190    Normal NVM Subsystem Shutdown event: Not Supported
00:07:10.190    Zone Descriptor Change Notices:      Not Supported
00:07:10.190    Discovery Log Change Notices:        Not Supported
00:07:10.190  Controller Attributes
00:07:10.190    128-bit Host Identifier:             Not Supported
00:07:10.190    Non-Operational Permissive Mode:     Not Supported
00:07:10.190    NVM Sets:                            Not Supported
00:07:10.190    Read Recovery Levels:                Not Supported
00:07:10.190    Endurance Groups:                    Not Supported
00:07:10.190    Predictable Latency Mode:            Not Supported
00:07:10.190    Traffic Based Keep ALive:            Not Supported
00:07:10.190    Namespace Granularity:               Not Supported
00:07:10.190    SQ Associations:                     Not Supported
00:07:10.190    UUID List:                           Not Supported
00:07:10.190    Multi-Domain Subsystem:              Not Supported
00:07:10.190    Fixed Capacity Management:           Not Supported
00:07:10.190    Variable Capacity Management:        Not Supported
00:07:10.190    Delete Endurance Group:              Not Supported
00:07:10.190    Delete NVM Set:                      Not Supported
00:07:10.190    Extended LBA Formats Supported:      Supported
00:07:10.190    Flexible Data Placement Supported:   Not Supported
00:07:10.190  
00:07:10.190  Controller Memory Buffer Support
00:07:10.190  ================================
00:07:10.190  Supported:                             No
00:07:10.190  
00:07:10.190  Persistent Memory Region Support
00:07:10.190  ================================
00:07:10.190  Supported:                             No
00:07:10.190  
00:07:10.190  Admin Command Set Attributes
00:07:10.190  ============================
00:07:10.190  Security Send/Receive:                 Not Supported
00:07:10.190  Format NVM:                            Supported
00:07:10.190  Firmware Activate/Download:            Not Supported
00:07:10.190  Namespace Management:                  Supported
00:07:10.190  Device Self-Test:                      Not Supported
00:07:10.190  Directives:                            Supported
00:07:10.190  NVMe-MI:                               Not Supported
00:07:10.190  Virtualization Management:             Not Supported
00:07:10.190  Doorbell Buffer Config:                Supported
00:07:10.190  Get LBA Status Capability:             Not Supported
00:07:10.190  Command & Feature Lockdown Capability: Not Supported
00:07:10.190  Abort Command Limit:                   4
00:07:10.190  Async Event Request Limit:             4
00:07:10.190  Number of Firmware Slots:              N/A
00:07:10.190  Firmware Slot 1 Read-Only:             N/A
00:07:10.190  Firmware Activation Without Reset:     N/A
00:07:10.190  Multiple Update Detection Support:     N/A
00:07:10.190  Firmware Update Granularity:           No Information Provided
00:07:10.190  Per-Namespace SMART Log:               Yes
00:07:10.190  Asymmetric Namespace Access Log Page:  Not Supported
00:07:10.190  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:07:10.190  Command Effects Log Page:              Supported
00:07:10.190  Get Log Page Extended Data:            Supported
00:07:10.190  Telemetry Log Pages:                   Not Supported
00:07:10.190  Persistent Event Log Pages:            Not Supported
00:07:10.190  Supported Log Pages Log Page:          May Support
00:07:10.190  Commands Supported & Effects Log Page: Not Supported
00:07:10.190  Feature Identifiers & Effects Log Page: May Support
00:07:10.190  NVMe-MI Commands & Effects Log Page:   May Support
00:07:10.190  Data Area 4 for Telemetry Log:         Not Supported
00:07:10.190  Error Log Page Entries Supported:      1
00:07:10.190  Keep Alive:                            Not Supported
00:07:10.190  
00:07:10.190  NVM Command Set Attributes
00:07:10.190  ==========================
00:07:10.190  Submission Queue Entry Size
00:07:10.190    Max:                       64
00:07:10.190    Min:                       64
00:07:10.190  Completion Queue Entry Size
00:07:10.190    Max:                       16
00:07:10.190    Min:                       16
00:07:10.190  Number of Namespaces:        256
00:07:10.190  Compare Command:             Supported
00:07:10.190  Write Uncorrectable Command: Not Supported
00:07:10.190  Dataset Management Command:  Supported
00:07:10.190  Write Zeroes Command:        Supported
00:07:10.190  Set Features Save Field:     Supported
00:07:10.190  Reservations:                Not Supported
00:07:10.190  Timestamp:                   Supported
00:07:10.190  Copy:                        Supported
00:07:10.190  Volatile Write Cache:        Present
00:07:10.190  Atomic Write Unit (Normal):  1
00:07:10.190  Atomic Write Unit (PFail):   1
00:07:10.190  Atomic Compare & Write Unit: 1
00:07:10.190  Fused Compare & Write:       Not Supported
00:07:10.190  Scatter-Gather List
00:07:10.190    SGL Command Set:           Supported
00:07:10.190    SGL Keyed:                 Not Supported
00:07:10.190    SGL Bit Bucket Descriptor: Not Supported
00:07:10.190    SGL Metadata Pointer:      Not Supported
00:07:10.190    Oversized SGL:             Not Supported
00:07:10.190    SGL Metadata Address:      Not Supported
00:07:10.190    SGL Offset:                Not Supported
00:07:10.190    Transport SGL Data Block:  Not Supported
00:07:10.190  Replay Protected Memory Block:  Not Supported
00:07:10.190  
00:07:10.190  Firmware Slot Information
00:07:10.190  =========================
00:07:10.190  Active slot:                 1
00:07:10.190  Slot 1 Firmware Revision:    1.0
00:07:10.190  
00:07:10.190  
00:07:10.190  Commands Supported and Effects
00:07:10.190  ==============================
00:07:10.190  Admin Commands
00:07:10.190  --------------
00:07:10.190     Delete I/O Submission Queue (00h): Supported 
00:07:10.190     Create I/O Submission Queue (01h): Supported 
00:07:10.190                    Get Log Page (02h): Supported 
00:07:10.190     Delete I/O Completion Queue (04h): Supported 
00:07:10.190     Create I/O Completion Queue (05h): Supported 
00:07:10.190                        Identify (06h): Supported 
00:07:10.190                           Abort (08h): Supported 
00:07:10.190                    Set Features (09h): Supported 
00:07:10.190                    Get Features (0Ah): Supported 
00:07:10.190      Asynchronous Event Request (0Ch): Supported 
00:07:10.190            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:10.190                  Directive Send (19h): Supported 
00:07:10.190               Directive Receive (1Ah): Supported 
00:07:10.190       Virtualization Management (1Ch): Supported 
00:07:10.190          Doorbell Buffer Config (7Ch): Supported 
00:07:10.190                      Format NVM (80h): Supported LBA-Change 
00:07:10.190  I/O Commands
00:07:10.190  ------------
00:07:10.190                           Flush (00h): Supported LBA-Change 
00:07:10.190                           Write (01h): Supported LBA-Change 
00:07:10.190                            Read (02h): Supported 
00:07:10.191                         Compare (05h): Supported 
00:07:10.191                    Write Zeroes (08h): Supported LBA-Change 
00:07:10.191              Dataset Management (09h): Supported LBA-Change 
00:07:10.191                         Unknown (0Ch): Supported 
00:07:10.191                         Unknown (12h): Supported 
00:07:10.191                            Copy (19h): Supported LBA-Change 
00:07:10.191                         Unknown (1Dh): Supported LBA-Change 
00:07:10.191  
00:07:10.191  Error Log
00:07:10.191  =========
00:07:10.191  
00:07:10.191  Arbitration
00:07:10.191  ===========
00:07:10.191  Arbitration Burst:           no limit
00:07:10.191  
00:07:10.191  Power Management
00:07:10.191  ================
00:07:10.191  Number of Power States:          1
00:07:10.191  Current Power State:             Power State #0
00:07:10.191  Power State #0:
00:07:10.191    Max Power:                     25.00 W
00:07:10.191    Non-Operational State:         Operational
00:07:10.191    Entry Latency:                 16 microseconds
00:07:10.191    Exit Latency:                  4 microseconds
00:07:10.191    Relative Read Throughput:      0
00:07:10.191    Relative Read Latency:         0
00:07:10.191    Relative Write Throughput:     0
00:07:10.191    Relative Write Latency:        0
00:07:10.191    Idle Power:                     Not Reported
00:07:10.191    Active Power:                   Not Reported
00:07:10.191  Non-Operational Permissive Mode: Not Supported
00:07:10.191  
00:07:10.191  Health Information
00:07:10.191  ==================
00:07:10.191  Critical Warnings:
00:07:10.191    Available Spare Space:     OK
00:07:10.191    Temperature:               OK
00:07:10.191    Device Reliability:        OK
00:07:10.191    Read Only:                 No
00:07:10.191    Volatile Memory Backup:    OK
00:07:10.191  Current Temperature:         323 Kelvin (50 Celsius)
00:07:10.191  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:10.191  Available Spare:             0%
00:07:10.191  Available Spare Threshold:   0%
00:07:10.191  Life Percentage Used:        0%
00:07:10.191  Data Units Read:             954
00:07:10.191  Data Units Written:          821
00:07:10.191  Host Read Commands:          52212
00:07:10.191  Host Write Commands:         51001
00:07:10.191  Controller Busy Time:        0 minutes
00:07:10.191  Power Cycles:                0
00:07:10.191  Power On Hours:              0 hours
00:07:10.191  Unsafe Shutdowns:            0
00:07:10.191  Unrecoverable Media Errors:  0
00:07:10.191  Lifetime Error Log Entries:  0
00:07:10.191  Warning Temperature Time:    0 minutes
00:07:10.191  Critical Temperature Time:   0 minutes
00:07:10.191  
00:07:10.191  Number of Queues
00:07:10.191  ================
00:07:10.191  Number of I/O Submission Queues:      64
00:07:10.191  Number of I/O Completion Queues:      64
00:07:10.191  
00:07:10.191  ZNS Specific Controller Data
00:07:10.191  ============================
00:07:10.191  Zone Append Size Limit:      0
00:07:10.191  
00:07:10.191  
00:07:10.191  Active Namespaces
00:07:10.191  =================
00:07:10.191  Namespace ID:1
00:07:10.191  Error Recovery Timeout:                Unlimited
00:07:10.191  Command Set Identifier:                NVM (00h)
00:07:10.191  Deallocate:                            Supported
00:07:10.191  Deallocated/Unwritten Error:           Supported
00:07:10.191  Deallocated Read Value:                All 0x00
00:07:10.191  Deallocate in Write Zeroes:            Not Supported
00:07:10.191  Deallocated Guard Field:               0xFFFF
00:07:10.191  Flush:                                 Supported
00:07:10.191  Reservation:                           Not Supported
00:07:10.191  Namespace Sharing Capabilities:        Private
00:07:10.191  Size (in LBAs):                        1310720 (5GiB)
00:07:10.191  Capacity (in LBAs):                    1310720 (5GiB)
00:07:10.191  Utilization (in LBAs):                 1310720 (5GiB)
00:07:10.191  Thin Provisioning:                     Not Supported
00:07:10.191  Per-NS Atomic Units:                   No
00:07:10.191  Maximum Single Source Range Length:    128
00:07:10.191  Maximum Copy Length:                   128
00:07:10.191  Maximum Source Range Count:            128
00:07:10.191  NGUID/EUI64 Never Reused:              No
00:07:10.191  Namespace Write Protected:             No
00:07:10.191  Number of LBA Formats:                 8
00:07:10.191  Current LBA Format:                    LBA Format #04
00:07:10.191  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:10.191  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:10.191  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:10.191  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:10.191  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:10.191  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:10.191  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:10.191  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:10.191  
00:07:10.191  NVM Specific Namespace Data
00:07:10.191  ===========================
00:07:10.191  Logical Block Storage Tag Mask:               0
00:07:10.191  Protection Information Capabilities:
00:07:10.191    16b Guard Protection Information Storage Tag Support:  No
00:07:10.191    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:10.191    Storage Tag Check Read Support:                        No
00:07:10.191  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.191   16:56:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:10.191   16:56:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0
00:07:10.191  =====================================================
00:07:10.191  NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:10.191  =====================================================
00:07:10.191  Controller Capabilities/Features
00:07:10.191  ================================
00:07:10.191  Vendor ID:                             1b36
00:07:10.191  Subsystem Vendor ID:                   1af4
00:07:10.191  Serial Number:                         12342
00:07:10.191  Model Number:                          QEMU NVMe Ctrl
00:07:10.191  Firmware Version:                      8.0.0
00:07:10.191  Recommended Arb Burst:                 6
00:07:10.191  IEEE OUI Identifier:                   00 54 52
00:07:10.191  Multi-path I/O
00:07:10.191    May have multiple subsystem ports:   No
00:07:10.191    May have multiple controllers:       No
00:07:10.191    Associated with SR-IOV VF:           No
00:07:10.191  Max Data Transfer Size:                524288
00:07:10.191  Max Number of Namespaces:              256
00:07:10.191  Max Number of I/O Queues:              64
00:07:10.191  NVMe Specification Version (VS):       1.4
00:07:10.191  NVMe Specification Version (Identify): 1.4
00:07:10.191  Maximum Queue Entries:                 2048
00:07:10.191  Contiguous Queues Required:            Yes
00:07:10.191  Arbitration Mechanisms Supported
00:07:10.191    Weighted Round Robin:                Not Supported
00:07:10.191    Vendor Specific:                     Not Supported
00:07:10.191  Reset Timeout:                         7500 ms
00:07:10.191  Doorbell Stride:                       4 bytes
00:07:10.191  NVM Subsystem Reset:                   Not Supported
00:07:10.191  Command Sets Supported
00:07:10.191    NVM Command Set:                     Supported
00:07:10.191  Boot Partition:                        Not Supported
00:07:10.191  Memory Page Size Minimum:              4096 bytes
00:07:10.191  Memory Page Size Maximum:              65536 bytes
00:07:10.191  Persistent Memory Region:              Not Supported
00:07:10.191  Optional Asynchronous Events Supported
00:07:10.191    Namespace Attribute Notices:         Supported
00:07:10.191    Firmware Activation Notices:         Not Supported
00:07:10.191    ANA Change Notices:                  Not Supported
00:07:10.191    PLE Aggregate Log Change Notices:    Not Supported
00:07:10.191    LBA Status Info Alert Notices:       Not Supported
00:07:10.191    EGE Aggregate Log Change Notices:    Not Supported
00:07:10.191    Normal NVM Subsystem Shutdown event: Not Supported
00:07:10.191    Zone Descriptor Change Notices:      Not Supported
00:07:10.191    Discovery Log Change Notices:        Not Supported
00:07:10.191  Controller Attributes
00:07:10.191    128-bit Host Identifier:             Not Supported
00:07:10.191    Non-Operational Permissive Mode:     Not Supported
00:07:10.191    NVM Sets:                            Not Supported
00:07:10.192    Read Recovery Levels:                Not Supported
00:07:10.192    Endurance Groups:                    Not Supported
00:07:10.192    Predictable Latency Mode:            Not Supported
00:07:10.192    Traffic Based Keep Alive:            Not Supported
00:07:10.192    Namespace Granularity:               Not Supported
00:07:10.192    SQ Associations:                     Not Supported
00:07:10.192    UUID List:                           Not Supported
00:07:10.192    Multi-Domain Subsystem:              Not Supported
00:07:10.192    Fixed Capacity Management:           Not Supported
00:07:10.192    Variable Capacity Management:        Not Supported
00:07:10.192    Delete Endurance Group:              Not Supported
00:07:10.192    Delete NVM Set:                      Not Supported
00:07:10.192    Extended LBA Formats Supported:      Supported
00:07:10.192    Flexible Data Placement Supported:   Not Supported
00:07:10.192  
00:07:10.192  Controller Memory Buffer Support
00:07:10.192  ================================
00:07:10.192  Supported:                             No
00:07:10.192  
00:07:10.192  Persistent Memory Region Support
00:07:10.192  ================================
00:07:10.192  Supported:                             No
00:07:10.192  
00:07:10.192  Admin Command Set Attributes
00:07:10.192  ============================
00:07:10.192  Security Send/Receive:                 Not Supported
00:07:10.192  Format NVM:                            Supported
00:07:10.192  Firmware Activate/Download:            Not Supported
00:07:10.192  Namespace Management:                  Supported
00:07:10.192  Device Self-Test:                      Not Supported
00:07:10.192  Directives:                            Supported
00:07:10.192  NVMe-MI:                               Not Supported
00:07:10.192  Virtualization Management:             Not Supported
00:07:10.192  Doorbell Buffer Config:                Supported
00:07:10.192  Get LBA Status Capability:             Not Supported
00:07:10.192  Command & Feature Lockdown Capability: Not Supported
00:07:10.192  Abort Command Limit:                   4
00:07:10.192  Async Event Request Limit:             4
00:07:10.192  Number of Firmware Slots:              N/A
00:07:10.192  Firmware Slot 1 Read-Only:             N/A
00:07:10.192  Firmware Activation Without Reset:     N/A
00:07:10.192  Multiple Update Detection Support:     N/A
00:07:10.192  Firmware Update Granularity:           No Information Provided
00:07:10.192  Per-Namespace SMART Log:               Yes
00:07:10.192  Asymmetric Namespace Access Log Page:  Not Supported
00:07:10.192  Subsystem NQN:                         nqn.2019-08.org.qemu:12342
00:07:10.192  Command Effects Log Page:              Supported
00:07:10.192  Get Log Page Extended Data:            Supported
00:07:10.192  Telemetry Log Pages:                   Not Supported
00:07:10.192  Persistent Event Log Pages:            Not Supported
00:07:10.192  Supported Log Pages Log Page:          May Support
00:07:10.192  Commands Supported & Effects Log Page: Not Supported
00:07:10.192  Feature Identifiers & Effects Log Page: May Support
00:07:10.192  NVMe-MI Commands & Effects Log Page:   May Support
00:07:10.192  Data Area 4 for Telemetry Log:         Not Supported
00:07:10.192  Error Log Page Entries Supported:      1
00:07:10.192  Keep Alive:                            Not Supported
00:07:10.192  
00:07:10.192  NVM Command Set Attributes
00:07:10.192  ==========================
00:07:10.192  Submission Queue Entry Size
00:07:10.192    Max:                       64
00:07:10.192    Min:                       64
00:07:10.192  Completion Queue Entry Size
00:07:10.192    Max:                       16
00:07:10.192    Min:                       16
00:07:10.192  Number of Namespaces:        256
00:07:10.192  Compare Command:             Supported
00:07:10.192  Write Uncorrectable Command: Not Supported
00:07:10.192  Dataset Management Command:  Supported
00:07:10.192  Write Zeroes Command:        Supported
00:07:10.192  Set Features Save Field:     Supported
00:07:10.192  Reservations:                Not Supported
00:07:10.192  Timestamp:                   Supported
00:07:10.192  Copy:                        Supported
00:07:10.192  Volatile Write Cache:        Present
00:07:10.192  Atomic Write Unit (Normal):  1
00:07:10.192  Atomic Write Unit (PFail):   1
00:07:10.192  Atomic Compare & Write Unit: 1
00:07:10.192  Fused Compare & Write:       Not Supported
00:07:10.192  Scatter-Gather List
00:07:10.192    SGL Command Set:           Supported
00:07:10.192    SGL Keyed:                 Not Supported
00:07:10.192    SGL Bit Bucket Descriptor: Not Supported
00:07:10.192    SGL Metadata Pointer:      Not Supported
00:07:10.192    Oversized SGL:             Not Supported
00:07:10.192    SGL Metadata Address:      Not Supported
00:07:10.192    SGL Offset:                Not Supported
00:07:10.192    Transport SGL Data Block:  Not Supported
00:07:10.192  Replay Protected Memory Block:  Not Supported
00:07:10.192  
00:07:10.192  Firmware Slot Information
00:07:10.192  =========================
00:07:10.192  Active slot:                 1
00:07:10.192  Slot 1 Firmware Revision:    1.0
00:07:10.192  
00:07:10.192  
00:07:10.192  Commands Supported and Effects
00:07:10.192  ==============================
00:07:10.192  Admin Commands
00:07:10.192  --------------
00:07:10.192     Delete I/O Submission Queue (00h): Supported 
00:07:10.192     Create I/O Submission Queue (01h): Supported 
00:07:10.192                    Get Log Page (02h): Supported 
00:07:10.192     Delete I/O Completion Queue (04h): Supported 
00:07:10.192     Create I/O Completion Queue (05h): Supported 
00:07:10.192                        Identify (06h): Supported 
00:07:10.192                           Abort (08h): Supported 
00:07:10.192                    Set Features (09h): Supported 
00:07:10.192                    Get Features (0Ah): Supported 
00:07:10.192      Asynchronous Event Request (0Ch): Supported 
00:07:10.192            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:10.192                  Directive Send (19h): Supported 
00:07:10.192               Directive Receive (1Ah): Supported 
00:07:10.192       Virtualization Management (1Ch): Supported 
00:07:10.192          Doorbell Buffer Config (7Ch): Supported 
00:07:10.192                      Format NVM (80h): Supported LBA-Change 
00:07:10.192  I/O Commands
00:07:10.192  ------------
00:07:10.192                           Flush (00h): Supported LBA-Change 
00:07:10.192                           Write (01h): Supported LBA-Change 
00:07:10.192                            Read (02h): Supported 
00:07:10.192                         Compare (05h): Supported 
00:07:10.192                    Write Zeroes (08h): Supported LBA-Change 
00:07:10.192              Dataset Management (09h): Supported LBA-Change 
00:07:10.192                         Unknown (0Ch): Supported 
00:07:10.192                         Unknown (12h): Supported 
00:07:10.192                            Copy (19h): Supported LBA-Change 
00:07:10.192                         Unknown (1Dh): Supported LBA-Change 
00:07:10.192  
00:07:10.192  Error Log
00:07:10.192  =========
00:07:10.192  
00:07:10.192  Arbitration
00:07:10.192  ===========
00:07:10.192  Arbitration Burst:           no limit
00:07:10.192  
00:07:10.192  Power Management
00:07:10.192  ================
00:07:10.192  Number of Power States:          1
00:07:10.192  Current Power State:             Power State #0
00:07:10.192  Power State #0:
00:07:10.192    Max Power:                     25.00 W
00:07:10.192    Non-Operational State:         Operational
00:07:10.192    Entry Latency:                 16 microseconds
00:07:10.192    Exit Latency:                  4 microseconds
00:07:10.192    Relative Read Throughput:      0
00:07:10.192    Relative Read Latency:         0
00:07:10.192    Relative Write Throughput:     0
00:07:10.192    Relative Write Latency:        0
00:07:10.192    Idle Power:                     Not Reported
00:07:10.192    Active Power:                   Not Reported
00:07:10.192  Non-Operational Permissive Mode: Not Supported
00:07:10.192  
00:07:10.192  Health Information
00:07:10.192  ==================
00:07:10.192  Critical Warnings:
00:07:10.192    Available Spare Space:     OK
00:07:10.192    Temperature:               OK
00:07:10.192    Device Reliability:        OK
00:07:10.192    Read Only:                 No
00:07:10.192    Volatile Memory Backup:    OK
00:07:10.192  Current Temperature:         323 Kelvin (50 Celsius)
00:07:10.192  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:10.192  Available Spare:             0%
00:07:10.192  Available Spare Threshold:   0%
00:07:10.192  Life Percentage Used:        0%
00:07:10.192  Data Units Read:             2048
00:07:10.192  Data Units Written:          1836
00:07:10.192  Host Read Commands:          105321
00:07:10.192  Host Write Commands:         103591
00:07:10.192  Controller Busy Time:        0 minutes
00:07:10.192  Power Cycles:                0
00:07:10.192  Power On Hours:              0 hours
00:07:10.192  Unsafe Shutdowns:            0
00:07:10.192  Unrecoverable Media Errors:  0
00:07:10.193  Lifetime Error Log Entries:  0
00:07:10.193  Warning Temperature Time:    0 minutes
00:07:10.193  Critical Temperature Time:   0 minutes
00:07:10.193  
00:07:10.193  Number of Queues
00:07:10.193  ================
00:07:10.193  Number of I/O Submission Queues:      64
00:07:10.193  Number of I/O Completion Queues:      64
00:07:10.193  
00:07:10.193  ZNS Specific Controller Data
00:07:10.193  ============================
00:07:10.193  Zone Append Size Limit:      0
00:07:10.193  
00:07:10.193  
00:07:10.193  Active Namespaces
00:07:10.193  =================
00:07:10.193  Namespace ID:1
00:07:10.193  Error Recovery Timeout:                Unlimited
00:07:10.193  Command Set Identifier:                NVM (00h)
00:07:10.193  Deallocate:                            Supported
00:07:10.193  Deallocated/Unwritten Error:           Supported
00:07:10.193  Deallocated Read Value:                All 0x00
00:07:10.193  Deallocate in Write Zeroes:            Not Supported
00:07:10.193  Deallocated Guard Field:               0xFFFF
00:07:10.193  Flush:                                 Supported
00:07:10.193  Reservation:                           Not Supported
00:07:10.193  Namespace Sharing Capabilities:        Private
00:07:10.193  Size (in LBAs):                        1048576 (4GiB)
00:07:10.193  Capacity (in LBAs):                    1048576 (4GiB)
00:07:10.193  Utilization (in LBAs):                 1048576 (4GiB)
00:07:10.193  Thin Provisioning:                     Not Supported
00:07:10.193  Per-NS Atomic Units:                   No
00:07:10.193  Maximum Single Source Range Length:    128
00:07:10.193  Maximum Copy Length:                   128
00:07:10.193  Maximum Source Range Count:            128
00:07:10.193  NGUID/EUI64 Never Reused:              No
00:07:10.193  Namespace Write Protected:             No
00:07:10.193  Number of LBA Formats:                 8
00:07:10.193  Current LBA Format:                    LBA Format #04
00:07:10.193  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:10.193  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:10.193  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:10.193  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:10.193  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:10.193  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:10.193  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:10.193  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:10.193  
00:07:10.193  NVM Specific Namespace Data
00:07:10.193  ===========================
00:07:10.193  Logical Block Storage Tag Mask:               0
00:07:10.193  Protection Information Capabilities:
00:07:10.193    16b Guard Protection Information Storage Tag Support:  No
00:07:10.193    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:10.193    Storage Tag Check Read Support:                        No
00:07:10.193  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Namespace ID:2
00:07:10.193  Error Recovery Timeout:                Unlimited
00:07:10.193  Command Set Identifier:                NVM (00h)
00:07:10.193  Deallocate:                            Supported
00:07:10.193  Deallocated/Unwritten Error:           Supported
00:07:10.193  Deallocated Read Value:                All 0x00
00:07:10.193  Deallocate in Write Zeroes:            Not Supported
00:07:10.193  Deallocated Guard Field:               0xFFFF
00:07:10.193  Flush:                                 Supported
00:07:10.193  Reservation:                           Not Supported
00:07:10.193  Namespace Sharing Capabilities:        Private
00:07:10.193  Size (in LBAs):                        1048576 (4GiB)
00:07:10.193  Capacity (in LBAs):                    1048576 (4GiB)
00:07:10.193  Utilization (in LBAs):                 1048576 (4GiB)
00:07:10.193  Thin Provisioning:                     Not Supported
00:07:10.193  Per-NS Atomic Units:                   No
00:07:10.193  Maximum Single Source Range Length:    128
00:07:10.193  Maximum Copy Length:                   128
00:07:10.193  Maximum Source Range Count:            128
00:07:10.193  NGUID/EUI64 Never Reused:              No
00:07:10.193  Namespace Write Protected:             No
00:07:10.193  Number of LBA Formats:                 8
00:07:10.193  Current LBA Format:                    LBA Format #04
00:07:10.193  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:10.193  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:10.193  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:10.193  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:10.193  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:10.193  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:10.193  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:10.193  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:10.193  
00:07:10.193  NVM Specific Namespace Data
00:07:10.193  ===========================
00:07:10.193  Logical Block Storage Tag Mask:               0
00:07:10.193  Protection Information Capabilities:
00:07:10.193    16b Guard Protection Information Storage Tag Support:  No
00:07:10.193    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:10.193    Storage Tag Check Read Support:                        No
00:07:10.193  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.193  Namespace ID:3
00:07:10.193  Error Recovery Timeout:                Unlimited
00:07:10.193  Command Set Identifier:                NVM (00h)
00:07:10.193  Deallocate:                            Supported
00:07:10.193  Deallocated/Unwritten Error:           Supported
00:07:10.193  Deallocated Read Value:                All 0x00
00:07:10.193  Deallocate in Write Zeroes:            Not Supported
00:07:10.193  Deallocated Guard Field:               0xFFFF
00:07:10.193  Flush:                                 Supported
00:07:10.193  Reservation:                           Not Supported
00:07:10.193  Namespace Sharing Capabilities:        Private
00:07:10.193  Size (in LBAs):                        1048576 (4GiB)
00:07:10.193  Capacity (in LBAs):                    1048576 (4GiB)
00:07:10.193  Utilization (in LBAs):                 1048576 (4GiB)
00:07:10.193  Thin Provisioning:                     Not Supported
00:07:10.193  Per-NS Atomic Units:                   No
00:07:10.193  Maximum Single Source Range Length:    128
00:07:10.193  Maximum Copy Length:                   128
00:07:10.193  Maximum Source Range Count:            128
00:07:10.193  NGUID/EUI64 Never Reused:              No
00:07:10.193  Namespace Write Protected:             No
00:07:10.193  Number of LBA Formats:                 8
00:07:10.193  Current LBA Format:                    LBA Format #04
00:07:10.193  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:10.193  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:10.193  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:10.193  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:10.193  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:10.193  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:10.193  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:10.193  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:10.193  
00:07:10.193  NVM Specific Namespace Data
00:07:10.193  ===========================
00:07:10.193  Logical Block Storage Tag Mask:               0
00:07:10.193  Protection Information Capabilities:
00:07:10.193    16b Guard Protection Information Storage Tag Support:  No
00:07:10.193    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:10.452    Storage Tag Check Read Support:                        No
00:07:10.453  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:10.453   16:56:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:10.453   16:56:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
00:07:10.453  =====================================================
00:07:10.453  NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:10.453  =====================================================
00:07:10.453  Controller Capabilities/Features
00:07:10.453  ================================
00:07:10.453  Vendor ID:                             1b36
00:07:10.453  Subsystem Vendor ID:                   1af4
00:07:10.453  Serial Number:                         12343
00:07:10.453  Model Number:                          QEMU NVMe Ctrl
00:07:10.453  Firmware Version:                      8.0.0
00:07:10.453  Recommended Arb Burst:                 6
00:07:10.453  IEEE OUI Identifier:                   00 54 52
00:07:10.453  Multi-path I/O
00:07:10.453    May have multiple subsystem ports:   No
00:07:10.453    May have multiple controllers:       Yes
00:07:10.453    Associated with SR-IOV VF:           No
00:07:10.453  Max Data Transfer Size:                524288
00:07:10.453  Max Number of Namespaces:              256
00:07:10.453  Max Number of I/O Queues:              64
00:07:10.453  NVMe Specification Version (VS):       1.4
00:07:10.453  NVMe Specification Version (Identify): 1.4
00:07:10.453  Maximum Queue Entries:                 2048
00:07:10.453  Contiguous Queues Required:            Yes
00:07:10.453  Arbitration Mechanisms Supported
00:07:10.453    Weighted Round Robin:                Not Supported
00:07:10.453    Vendor Specific:                     Not Supported
00:07:10.453  Reset Timeout:                         7500 ms
00:07:10.453  Doorbell Stride:                       4 bytes
00:07:10.453  NVM Subsystem Reset:                   Not Supported
00:07:10.453  Command Sets Supported
00:07:10.453    NVM Command Set:                     Supported
00:07:10.453  Boot Partition:                        Not Supported
00:07:10.453  Memory Page Size Minimum:              4096 bytes
00:07:10.453  Memory Page Size Maximum:              65536 bytes
00:07:10.453  Persistent Memory Region:              Not Supported
00:07:10.453  Optional Asynchronous Events Supported
00:07:10.453    Namespace Attribute Notices:         Supported
00:07:10.453    Firmware Activation Notices:         Not Supported
00:07:10.453    ANA Change Notices:                  Not Supported
00:07:10.453    PLE Aggregate Log Change Notices:    Not Supported
00:07:10.453    LBA Status Info Alert Notices:       Not Supported
00:07:10.453    EGE Aggregate Log Change Notices:    Not Supported
00:07:10.453    Normal NVM Subsystem Shutdown event: Not Supported
00:07:10.453    Zone Descriptor Change Notices:      Not Supported
00:07:10.453    Discovery Log Change Notices:        Not Supported
00:07:10.453  Controller Attributes
00:07:10.453    128-bit Host Identifier:             Not Supported
00:07:10.453    Non-Operational Permissive Mode:     Not Supported
00:07:10.453    NVM Sets:                            Not Supported
00:07:10.453    Read Recovery Levels:                Not Supported
00:07:10.453    Endurance Groups:                    Supported
00:07:10.453    Predictable Latency Mode:            Not Supported
00:07:10.453    Traffic Based Keep Alive:            Not Supported
00:07:10.453    Namespace Granularity:               Not Supported
00:07:10.453    SQ Associations:                     Not Supported
00:07:10.453    UUID List:                           Not Supported
00:07:10.453    Multi-Domain Subsystem:              Not Supported
00:07:10.453    Fixed Capacity Management:           Not Supported
00:07:10.453    Variable Capacity Management:        Not Supported
00:07:10.453    Delete Endurance Group:              Not Supported
00:07:10.453    Delete NVM Set:                      Not Supported
00:07:10.453    Extended LBA Formats Supported:      Supported
00:07:10.453    Flexible Data Placement Supported:   Supported
00:07:10.453  
00:07:10.453  Controller Memory Buffer Support
00:07:10.453  ================================
00:07:10.453  Supported:                             No
00:07:10.453  
00:07:10.453  Persistent Memory Region Support
00:07:10.453  ================================
00:07:10.453  Supported:                             No
00:07:10.453  
00:07:10.453  Admin Command Set Attributes
00:07:10.453  ============================
00:07:10.453  Security Send/Receive:                 Not Supported
00:07:10.453  Format NVM:                            Supported
00:07:10.453  Firmware Activate/Download:            Not Supported
00:07:10.453  Namespace Management:                  Supported
00:07:10.453  Device Self-Test:                      Not Supported
00:07:10.453  Directives:                            Supported
00:07:10.453  NVMe-MI:                               Not Supported
00:07:10.453  Virtualization Management:             Not Supported
00:07:10.453  Doorbell Buffer Config:                Supported
00:07:10.453  Get LBA Status Capability:             Not Supported
00:07:10.453  Command & Feature Lockdown Capability: Not Supported
00:07:10.453  Abort Command Limit:                   4
00:07:10.453  Async Event Request Limit:             4
00:07:10.453  Number of Firmware Slots:              N/A
00:07:10.453  Firmware Slot 1 Read-Only:             N/A
00:07:10.453  Firmware Activation Without Reset:     N/A
00:07:10.453  Multiple Update Detection Support:     N/A
00:07:10.453  Firmware Update Granularity:           No Information Provided
00:07:10.453  Per-Namespace SMART Log:               Yes
00:07:10.453  Asymmetric Namespace Access Log Page:  Not Supported
00:07:10.453  Subsystem NQN:                         nqn.2019-08.org.qemu:fdp-subsys3
00:07:10.453  Command Effects Log Page:              Supported
00:07:10.453  Get Log Page Extended Data:            Supported
00:07:10.453  Telemetry Log Pages:                   Not Supported
00:07:10.453  Persistent Event Log Pages:            Not Supported
00:07:10.453  Supported Log Pages Log Page:          May Support
00:07:10.453  Commands Supported & Effects Log Page: Not Supported
00:07:10.453  Feature Identifiers & Effects Log Page: May Support
00:07:10.453  NVMe-MI Commands & Effects Log Page:   May Support
00:07:10.453  Data Area 4 for Telemetry Log:         Not Supported
00:07:10.453  Error Log Page Entries Supported:      1
00:07:10.453  Keep Alive:                            Not Supported
00:07:10.453  
00:07:10.453  NVM Command Set Attributes
00:07:10.453  ==========================
00:07:10.453  Submission Queue Entry Size
00:07:10.453    Max:                       64
00:07:10.453    Min:                       64
00:07:10.453  Completion Queue Entry Size
00:07:10.453    Max:                       16
00:07:10.453    Min:                       16
00:07:10.453  Number of Namespaces:        256
00:07:10.453  Compare Command:             Supported
00:07:10.453  Write Uncorrectable Command: Not Supported
00:07:10.453  Dataset Management Command:  Supported
00:07:10.453  Write Zeroes Command:        Supported
00:07:10.453  Set Features Save Field:     Supported
00:07:10.453  Reservations:                Not Supported
00:07:10.453  Timestamp:                   Supported
00:07:10.453  Copy:                        Supported
00:07:10.453  Volatile Write Cache:        Present
00:07:10.453  Atomic Write Unit (Normal):  1
00:07:10.453  Atomic Write Unit (PFail):   1
00:07:10.453  Atomic Compare & Write Unit: 1
00:07:10.453  Fused Compare & Write:       Not Supported
00:07:10.453  Scatter-Gather List
00:07:10.453    SGL Command Set:           Supported
00:07:10.453    SGL Keyed:                 Not Supported
00:07:10.453    SGL Bit Bucket Descriptor: Not Supported
00:07:10.453    SGL Metadata Pointer:      Not Supported
00:07:10.453    Oversized SGL:             Not Supported
00:07:10.453    SGL Metadata Address:      Not Supported
00:07:10.453    SGL Offset:                Not Supported
00:07:10.453    Transport SGL Data Block:  Not Supported
00:07:10.453  Replay Protected Memory Block:  Not Supported
00:07:10.453  
00:07:10.453  Firmware Slot Information
00:07:10.453  =========================
00:07:10.453  Active slot:                 1
00:07:10.453  Slot 1 Firmware Revision:    1.0
00:07:10.453  
00:07:10.453  
00:07:10.453  Commands Supported and Effects
00:07:10.454  ==============================
00:07:10.454  Admin Commands
00:07:10.454  --------------
00:07:10.454     Delete I/O Submission Queue (00h): Supported 
00:07:10.454     Create I/O Submission Queue (01h): Supported 
00:07:10.454                    Get Log Page (02h): Supported 
00:07:10.454     Delete I/O Completion Queue (04h): Supported 
00:07:10.454     Create I/O Completion Queue (05h): Supported 
00:07:10.454                        Identify (06h): Supported 
00:07:10.454                           Abort (08h): Supported 
00:07:10.454                    Set Features (09h): Supported 
00:07:10.454                    Get Features (0Ah): Supported 
00:07:10.454      Asynchronous Event Request (0Ch): Supported 
00:07:10.454            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:10.454                  Directive Send (19h): Supported 
00:07:10.454               Directive Receive (1Ah): Supported 
00:07:10.454       Virtualization Management (1Ch): Supported 
00:07:10.454          Doorbell Buffer Config (7Ch): Supported 
00:07:10.454                      Format NVM (80h): Supported LBA-Change 
00:07:10.454  I/O Commands
00:07:10.454  ------------
00:07:10.454                           Flush (00h): Supported LBA-Change 
00:07:10.454                           Write (01h): Supported LBA-Change 
00:07:10.454                            Read (02h): Supported 
00:07:10.454                         Compare (05h): Supported 
00:07:10.454                    Write Zeroes (08h): Supported LBA-Change 
00:07:10.454              Dataset Management (09h): Supported LBA-Change 
00:07:10.454                         Unknown (0Ch): Supported 
00:07:10.454                         Unknown (12h): Supported 
00:07:10.454                            Copy (19h): Supported LBA-Change 
00:07:10.454                         Unknown (1Dh): Supported LBA-Change 
00:07:10.454  
00:07:10.454  Error Log
00:07:10.454  =========
00:07:10.454  
00:07:10.454  Arbitration
00:07:10.454  ===========
00:07:10.454  Arbitration Burst:           no limit
00:07:10.454  
00:07:10.454  Power Management
00:07:10.454  ================
00:07:10.454  Number of Power States:          1
00:07:10.454  Current Power State:             Power State #0
00:07:10.454  Power State #0:
00:07:10.454    Max Power:                     25.00 W
00:07:10.454    Non-Operational State:         Operational
00:07:10.454    Entry Latency:                 16 microseconds
00:07:10.454    Exit Latency:                  4 microseconds
00:07:10.454    Relative Read Throughput:      0
00:07:10.454    Relative Read Latency:         0
00:07:10.454    Relative Write Throughput:     0
00:07:10.454    Relative Write Latency:        0
00:07:10.454    Idle Power:                     Not Reported
00:07:10.454    Active Power:                   Not Reported
00:07:10.454  Non-Operational Permissive Mode: Not Supported
00:07:10.454  
00:07:10.454  Health Information
00:07:10.454  ==================
00:07:10.454  Critical Warnings:
00:07:10.454    Available Spare Space:     OK
00:07:10.454    Temperature:               OK
00:07:10.454    Device Reliability:        OK
00:07:10.454    Read Only:                 No
00:07:10.454    Volatile Memory Backup:    OK
00:07:10.454  Current Temperature:         323 Kelvin (50 Celsius)
00:07:10.454  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:10.454  Available Spare:             0%
00:07:10.454  Available Spare Threshold:   0%
00:07:10.454  Life Percentage Used:        0%
00:07:10.454  Data Units Read:             774
00:07:10.454  Data Units Written:          703
00:07:10.454  Host Read Commands:          35976
00:07:10.454  Host Write Commands:         35399
00:07:10.454  Controller Busy Time:        0 minutes
00:07:10.454  Power Cycles:                0
00:07:10.454  Power On Hours:              0 hours
00:07:10.454  Unsafe Shutdowns:            0
00:07:10.454  Unrecoverable Media Errors:  0
00:07:10.454  Lifetime Error Log Entries:  0
00:07:10.454  Warning Temperature Time:    0 minutes
00:07:10.454  Critical Temperature Time:   0 minutes
00:07:10.454  
00:07:10.454  Number of Queues
00:07:10.454  ================
00:07:10.454  Number of I/O Submission Queues:      64
00:07:10.454  Number of I/O Completion Queues:      64
00:07:10.454  
00:07:10.454  ZNS Specific Controller Data
00:07:10.454  ============================
00:07:10.454  Zone Append Size Limit:      0
00:07:10.454  
00:07:10.454  
00:07:10.454  Active Namespaces
00:07:10.454  =================
00:07:10.454  Namespace ID:1
00:07:10.454  Error Recovery Timeout:                Unlimited
00:07:10.454  Command Set Identifier:                NVM (00h)
00:07:10.454  Deallocate:                            Supported
00:07:10.454  Deallocated/Unwritten Error:           Supported
00:07:10.454  Deallocated Read Value:                All 0x00
00:07:10.454  Deallocate in Write Zeroes:            Not Supported
00:07:10.454  Deallocated Guard Field:               0xFFFF
00:07:10.454  Flush:                                 Supported
00:07:10.454  Reservation:                           Not Supported
00:07:10.454  Namespace Sharing Capabilities:        Multiple Controllers
00:07:10.454  Size (in LBAs):                        262144 (1GiB)
00:07:10.454  Capacity (in LBAs):                    262144 (1GiB)
00:07:10.454  Utilization (in LBAs):                 262144 (1GiB)
00:07:10.454  Thin Provisioning:                     Not Supported
00:07:10.454  Per-NS Atomic Units:                   No
00:07:10.454  Maximum Single Source Range Length:    128
00:07:10.454  Maximum Copy Length:                   128
00:07:10.454  Maximum Source Range Count:            128
00:07:10.454  NGUID/EUI64 Never Reused:              No
00:07:10.454  Namespace Write Protected:             No
00:07:10.454  Endurance group ID:                    1
00:07:10.454  Number of LBA Formats:                 8
00:07:10.454  Current LBA Format:                    LBA Format #04
00:07:10.454  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:10.454  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:10.454  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:10.454  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:10.454  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:10.454  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:10.454  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:10.454  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:10.454  
00:07:10.454  Get Feature FDP:
00:07:10.454  ================
00:07:10.454    Enabled:                 Yes
00:07:10.454    FDP configuration index: 0
00:07:10.454  
00:07:10.454  FDP configurations log page
00:07:10.454  ===========================
00:07:10.454  Number of FDP configurations:         1
00:07:10.454  Version:                              0
00:07:10.454  Size:                                 112
00:07:10.454  FDP Configuration Descriptor:         0
00:07:10.454    Descriptor Size:                    96
00:07:10.454    Reclaim Group Identifier format:    2
00:07:10.454    FDP Volatile Write Cache:           Not Present
00:07:10.454    FDP Configuration:                  Valid
00:07:10.454    Vendor Specific Size:               0
00:07:10.454    Number of Reclaim Groups:           2
00:07:10.454    Number of Reclaim Unit Handles:     8
00:07:10.454    Max Placement Identifiers:          128
00:07:10.454    Number of Namespaces Supported:     256
00:07:10.454    Reclaim Unit Nominal Size:          6000000 bytes
00:07:10.454    Estimated Reclaim Unit Time Limit:  Not Reported
00:07:10.454      RUH Desc #000:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #001:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #002:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #003:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #004:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #005:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #006:          RUH Type: Initially Isolated
00:07:10.454      RUH Desc #007:          RUH Type: Initially Isolated
00:07:10.454  
00:07:10.454  FDP reclaim unit handle usage log page
00:07:10.454  ======================================
00:07:10.454  Number of Reclaim Unit Handles:       8
00:07:10.454    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:07:10.454    RUH Usage Desc #001:   RUH Attributes: Unused
00:07:10.454    RUH Usage Desc #002:   RUH Attributes: Unused
00:07:10.454    RUH Usage Desc #003:   RUH Attributes: Unused
00:07:10.454    RUH Usage Desc #004:   RUH Attributes: Unused
00:07:10.454    RUH Usage Desc #005:   RUH Attributes: Unused
00:07:10.454    RUH Usage Desc #006:   RUH Attributes: Unused
00:07:10.454    RUH Usage Desc #007:   RUH Attributes: Unused
00:07:10.454  
00:07:10.454  FDP statistics log page
00:07:10.454  =======================
00:07:10.454  Host bytes with metadata written:  439463936
00:07:10.454  Media bytes with metadata written: 439500800
00:07:10.454  Media bytes erased:                0
00:07:10.454  
00:07:10.454  FDP events log page
00:07:10.455  ===================
00:07:10.455  Number of FDP events:              0
00:07:10.455  
00:07:10.455  NVM Specific Namespace Data
00:07:10.455  ===========================
00:07:10.455  Logical Block Storage Tag Mask:               0
00:07:10.455  Protection Information Capabilities:
00:07:10.455    16b Guard Protection Information Storage Tag Support:  No
00:07:10.455    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:10.455    Storage Tag Check Read Support:                        No
00:07:10.455  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:07:10.455  
00:07:10.455  real	0m1.153s
00:07:10.455  user	0m0.457s
00:07:10.455  sys	0m0.481s
00:07:10.455   16:56:33 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.455   16:56:33 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:07:10.455  ************************************
00:07:10.455  END TEST nvme_identify
00:07:10.455  ************************************
00:07:10.455   16:56:33 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:07:10.455   16:56:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:10.455   16:56:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.455   16:56:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:10.713  ************************************
00:07:10.713  START TEST nvme_perf
00:07:10.713  ************************************
00:07:10.713   16:56:33 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:07:10.713   16:56:33 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:07:12.090  Initializing NVMe Controllers
00:07:12.090  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:12.090  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:12.090  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:12.090  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:12.090  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:12.090  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:12.090  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:12.090  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:12.090  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:12.090  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:12.090  Initialization complete. Launching workers.
00:07:12.090  ========================================================
00:07:12.090                                                                             Latency(us)
00:07:12.090  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:12.090  PCIE (0000:00:10.0) NSID 1 from core  0:   18721.84     219.40    6845.98    4937.06   33015.73
00:07:12.090  PCIE (0000:00:11.0) NSID 1 from core  0:   18721.84     219.40    6837.54    4986.10   31265.74
00:07:12.090  PCIE (0000:00:13.0) NSID 1 from core  0:   18721.84     219.40    6827.18    4997.02   29920.39
00:07:12.090  PCIE (0000:00:12.0) NSID 1 from core  0:   18721.84     219.40    6816.58    4989.74   28137.79
00:07:12.090  PCIE (0000:00:12.0) NSID 2 from core  0:   18721.84     219.40    6806.00    4981.37   26337.35
00:07:12.090  PCIE (0000:00:12.0) NSID 3 from core  0:   18785.74     220.15    6772.31    5012.97   21274.86
00:07:12.090  ========================================================
00:07:12.090  Total                                  :  112394.93    1317.13    6817.57    4937.06   33015.73
00:07:12.090  
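The MiB/s column is consistent with the run parameters on the spdk_nvme_perf command line above (-o 12288, i.e. 12 KiB reads): throughput in MiB/s is simply IOPS x 12288 / 2^20. A quick cross-check in Python (IOPS values copied from the table):

    io_size = 12288           # -o 12288 on the command line above
    per_ns_iops = 18721.84    # each of the first five namespaces
    total_iops = 112394.93    # Total row

    print(per_ns_iops * io_size / 2**20)  # ~219.40 MiB/s, matching the table
    print(total_iops * io_size / 2**20)   # ~1317.13 MiB/s, matching Total
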
00:07:12.090  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:12.090  =================================================================================
00:07:12.090    1.00000% :  5142.055us
00:07:12.090   10.00000% :  5646.178us
00:07:12.090   25.00000% :  5923.446us
00:07:12.090   50.00000% :  6326.745us
00:07:12.090   75.00000% :  6956.898us
00:07:12.090   90.00000% :  8822.154us
00:07:12.090   95.00000% :  9679.163us
00:07:12.090   98.00000% : 10687.409us
00:07:12.090   99.00000% : 11191.532us
00:07:12.090   99.50000% : 27827.594us
00:07:12.090   99.90000% : 32667.175us
00:07:12.090   99.99000% : 33070.474us
00:07:12.090   99.99900% : 33070.474us
00:07:12.090   99.99990% : 33070.474us
00:07:12.090   99.99999% : 33070.474us
00:07:12.090  
00:07:12.090  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:12.090  =================================================================================
00:07:12.090    1.00000% :  5192.468us
00:07:12.090   10.00000% :  5671.385us
00:07:12.090   25.00000% :  5948.652us
00:07:12.090   50.00000% :  6301.538us
00:07:12.090   75.00000% :  6906.486us
00:07:12.090   90.00000% :  8822.154us
00:07:12.090   95.00000% :  9679.163us
00:07:12.090   98.00000% : 10737.822us
00:07:12.090   99.00000% : 11191.532us
00:07:12.090   99.50000% : 26012.751us
00:07:12.090   99.90000% : 31053.982us
00:07:12.090   99.99000% : 31255.631us
00:07:12.091   99.99900% : 31457.280us
00:07:12.091   99.99990% : 31457.280us
00:07:12.091   99.99999% : 31457.280us
00:07:12.091  
00:07:12.091  Summary latency data for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:07:12.091  =================================================================================
00:07:12.091    1.00000% :  5192.468us
00:07:12.091   10.00000% :  5671.385us
00:07:12.091   25.00000% :  5948.652us
00:07:12.091   50.00000% :  6276.332us
00:07:12.091   75.00000% :  6805.662us
00:07:12.091   90.00000% :  8822.154us
00:07:12.091   95.00000% :  9830.400us
00:07:12.091   98.00000% : 10737.822us
00:07:12.091   99.00000% : 11241.945us
00:07:12.091   99.50000% : 24601.206us
00:07:12.091   99.90000% : 29642.437us
00:07:12.091   99.99000% : 30045.735us
00:07:12.091   99.99900% : 30045.735us
00:07:12.091   99.99990% : 30045.735us
00:07:12.091   99.99999% : 30045.735us
00:07:12.091  
00:07:12.091  Summary latency data for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:07:12.091  =================================================================================
00:07:12.091    1.00000% :  5217.674us
00:07:12.091   10.00000% :  5696.591us
00:07:12.091   25.00000% :  5948.652us
00:07:12.091   50.00000% :  6276.332us
00:07:12.091   75.00000% :  6805.662us
00:07:12.091   90.00000% :  8822.154us
00:07:12.091   95.00000% :  9880.812us
00:07:12.091   98.00000% : 10636.997us
00:07:12.091   99.00000% : 11342.769us
00:07:12.091   99.50000% : 22887.188us
00:07:12.091   99.90000% : 27827.594us
00:07:12.091   99.99000% : 28230.892us
00:07:12.091   99.99900% : 28230.892us
00:07:12.091   99.99990% : 28230.892us
00:07:12.091   99.99999% : 28230.892us
00:07:12.091  
00:07:12.091  Summary latency data for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:07:12.091  =================================================================================
00:07:12.091    1.00000% :  5192.468us
00:07:12.091   10.00000% :  5671.385us
00:07:12.091   25.00000% :  5948.652us
00:07:12.091   50.00000% :  6301.538us
00:07:12.091   75.00000% :  6856.074us
00:07:12.091   90.00000% :  8822.154us
00:07:12.091   95.00000% :  9729.575us
00:07:12.091   98.00000% : 10636.997us
00:07:12.091   99.00000% : 11393.182us
00:07:12.091   99.50000% : 21273.994us
00:07:12.091   99.90000% : 26012.751us
00:07:12.091   99.99000% : 26416.049us
00:07:12.091   99.99900% : 26416.049us
00:07:12.091   99.99990% : 26416.049us
00:07:12.091   99.99999% : 26416.049us
00:07:12.091  
00:07:12.091  Summary latency data for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:07:12.091  =================================================================================
00:07:12.091    1.00000% :  5192.468us
00:07:12.091   10.00000% :  5671.385us
00:07:12.091   25.00000% :  5948.652us
00:07:12.091   50.00000% :  6301.538us
00:07:12.091   75.00000% :  6906.486us
00:07:12.091   90.00000% :  8822.154us
00:07:12.091   95.00000% :  9679.163us
00:07:12.091   98.00000% : 10687.409us
00:07:12.091   99.00000% : 11191.532us
00:07:12.091   99.50000% : 15930.289us
00:07:12.091   99.90000% : 20870.695us
00:07:12.091   99.99000% : 21273.994us
00:07:12.091   99.99900% : 21374.818us
00:07:12.091   99.99990% : 21374.818us
00:07:12.091   99.99999% : 21374.818us
00:07:12.091  
00:07:12.091  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:12.091  ==============================================================================
00:07:12.091         Range in us     Cumulative    IO count
00:07:12.091   4915.200 -  4940.406:    0.0053%  (        1)
00:07:12.091   4940.406 -  4965.612:    0.0747%  (       13)
00:07:12.091   4965.612 -  4990.818:    0.1227%  (        9)
00:07:12.091   4990.818 -  5016.025:    0.2186%  (       18)
00:07:12.091   5016.025 -  5041.231:    0.3253%  (       20)
00:07:12.091   5041.231 -  5066.437:    0.4533%  (       24)
00:07:12.091   5066.437 -  5091.643:    0.6079%  (       29)
00:07:12.091   5091.643 -  5116.849:    0.8799%  (       51)
00:07:12.091   5116.849 -  5142.055:    1.0292%  (       28)
00:07:12.091   5142.055 -  5167.262:    1.2479%  (       41)
00:07:12.091   5167.262 -  5192.468:    1.4452%  (       37)
00:07:12.091   5192.468 -  5217.674:    1.7278%  (       53)
00:07:12.091   5217.674 -  5242.880:    1.9731%  (       46)
00:07:12.091   5242.880 -  5268.086:    2.2878%  (       59)
00:07:12.091   5268.086 -  5293.292:    2.5384%  (       47)
00:07:12.091   5293.292 -  5318.498:    2.8317%  (       55)
00:07:12.091   5318.498 -  5343.705:    3.1357%  (       57)
00:07:12.091   5343.705 -  5368.911:    3.4930%  (       67)
00:07:12.091   5368.911 -  5394.117:    3.8076%  (       59)
00:07:12.091   5394.117 -  5419.323:    4.1756%  (       69)
00:07:12.091   5419.323 -  5444.529:    4.6502%  (       89)
00:07:12.091   5444.529 -  5469.735:    5.1621%  (       96)
00:07:12.091   5469.735 -  5494.942:    5.7274%  (      106)
00:07:12.091   5494.942 -  5520.148:    6.4100%  (      128)
00:07:12.091   5520.148 -  5545.354:    7.0712%  (      124)
00:07:12.091   5545.354 -  5570.560:    7.8178%  (      140)
00:07:12.091   5570.560 -  5595.766:    8.5218%  (      132)
00:07:12.091   5595.766 -  5620.972:    9.2897%  (      144)
00:07:12.091   5620.972 -  5646.178:   10.2549%  (      181)
00:07:12.091   5646.178 -  5671.385:   11.2255%  (      182)
00:07:12.091   5671.385 -  5696.591:   12.3400%  (      209)
00:07:12.091   5696.591 -  5721.797:   13.7639%  (      267)
00:07:12.091   5721.797 -  5747.003:   15.0757%  (      246)
00:07:12.091   5747.003 -  5772.209:   16.5316%  (      273)
00:07:12.091   5772.209 -  5797.415:   18.0461%  (      284)
00:07:12.091   5797.415 -  5822.622:   19.6299%  (      297)
00:07:12.091   5822.622 -  5847.828:   21.1924%  (      293)
00:07:12.091   5847.828 -  5873.034:   22.7869%  (      299)
00:07:12.091   5873.034 -  5898.240:   24.2907%  (      282)
00:07:12.091   5898.240 -  5923.446:   26.0079%  (      322)
00:07:12.091   5923.446 -  5948.652:   27.6131%  (      301)
00:07:12.091   5948.652 -  5973.858:   29.2076%  (      299)
00:07:12.091   5973.858 -  5999.065:   30.7754%  (      294)
00:07:12.091   5999.065 -  6024.271:   32.4819%  (      320)
00:07:12.091   6024.271 -  6049.477:   34.0444%  (      293)
00:07:12.091   6049.477 -  6074.683:   35.6869%  (      308)
00:07:12.091   6074.683 -  6099.889:   37.3560%  (      313)
00:07:12.091   6099.889 -  6125.095:   38.9238%  (      294)
00:07:12.091   6125.095 -  6150.302:   40.5663%  (      308)
00:07:12.091   6150.302 -  6175.508:   42.1288%  (      293)
00:07:12.091   6175.508 -  6200.714:   43.6967%  (      294)
00:07:12.091   6200.714 -  6225.920:   45.2858%  (      298)
00:07:12.091   6225.920 -  6251.126:   46.8910%  (      301)
00:07:12.091   6251.126 -  6276.332:   48.4428%  (      291)
00:07:12.091   6276.332 -  6301.538:   49.9627%  (      285)
00:07:12.091   6301.538 -  6326.745:   51.4718%  (      283)
00:07:12.091   6326.745 -  6351.951:   53.0290%  (      292)
00:07:12.091   6351.951 -  6377.157:   54.6075%  (      296)
00:07:12.091   6377.157 -  6402.363:   56.0847%  (      277)
00:07:12.091   6402.363 -  6427.569:   57.5459%  (      274)
00:07:12.091   6427.569 -  6452.775:   58.9377%  (      261)
00:07:12.091   6452.775 -  6503.188:   61.7108%  (      520)
00:07:12.091   6503.188 -  6553.600:   64.2438%  (      475)
00:07:12.091   6553.600 -  6604.012:   66.6756%  (      456)
00:07:12.091   6604.012 -  6654.425:   68.8353%  (      405)
00:07:12.091   6654.425 -  6704.837:   70.6645%  (      343)
00:07:12.091   6704.837 -  6755.249:   72.1096%  (      271)
00:07:12.091   6755.249 -  6805.662:   73.2455%  (      213)
00:07:12.091   6805.662 -  6856.074:   74.1734%  (      174)
00:07:12.091   6856.074 -  6906.486:   74.8987%  (      136)
00:07:12.091   6906.486 -  6956.898:   75.5119%  (      115)
00:07:12.091   6956.898 -  7007.311:   76.0772%  (      106)
00:07:12.091   7007.311 -  7057.723:   76.5945%  (       97)
00:07:12.091   7057.723 -  7108.135:   77.0851%  (       92)
00:07:12.091   7108.135 -  7158.548:   77.5651%  (       90)
00:07:12.091   7158.548 -  7208.960:   78.0077%  (       83)
00:07:12.091   7208.960 -  7259.372:   78.4610%  (       85)
00:07:12.091   7259.372 -  7309.785:   78.8983%  (       82)
00:07:12.091   7309.785 -  7360.197:   79.3675%  (       88)
00:07:12.091   7360.197 -  7410.609:   79.8422%  (       89)
00:07:12.091   7410.609 -  7461.022:   80.2528%  (       77)
00:07:12.091   7461.022 -  7511.434:   80.6527%  (       75)
00:07:12.091   7511.434 -  7561.846:   81.1700%  (       97)
00:07:12.091   7561.846 -  7612.258:   81.5753%  (       76)
00:07:12.091   7612.258 -  7662.671:   82.0073%  (       81)
00:07:12.091   7662.671 -  7713.083:   82.5032%  (       93)
00:07:12.091   7713.083 -  7763.495:   82.9245%  (       79)
00:07:12.091   7763.495 -  7813.908:   83.3298%  (       76)
00:07:12.091   7813.908 -  7864.320:   83.7617%  (       81)
00:07:12.091   7864.320 -  7914.732:   84.1564%  (       74)
00:07:12.091   7914.732 -  7965.145:   84.5297%  (       70)
00:07:12.091   7965.145 -  8015.557:   84.8816%  (       66)
00:07:12.091   8015.557 -  8065.969:   85.1856%  (       57)
00:07:12.091   8065.969 -  8116.382:   85.5162%  (       62)
00:07:12.091   8116.382 -  8166.794:   85.8362%  (       60)
00:07:12.091   8166.794 -  8217.206:   86.1828%  (       65)
00:07:12.091   8217.206 -  8267.618:   86.5134%  (       62)
00:07:12.091   8267.618 -  8318.031:   86.8761%  (       68)
00:07:12.091   8318.031 -  8368.443:   87.2120%  (       63)
00:07:12.091   8368.443 -  8418.855:   87.5533%  (       64)
00:07:12.091   8418.855 -  8469.268:   87.9213%  (       69)
00:07:12.091   8469.268 -  8519.680:   88.2466%  (       61)
00:07:12.091   8519.680 -  8570.092:   88.5559%  (       58)
00:07:12.091   8570.092 -  8620.505:   88.9292%  (       70)
00:07:12.091   8620.505 -  8670.917:   89.2491%  (       60)
00:07:12.091   8670.917 -  8721.329:   89.5744%  (       61)
00:07:12.091   8721.329 -  8771.742:   89.8944%  (       60)
00:07:12.091   8771.742 -  8822.154:   90.2197%  (       61)
00:07:12.092   8822.154 -  8872.566:   90.5930%  (       70)
00:07:12.092   8872.566 -  8922.978:   90.9876%  (       74)
00:07:12.092   8922.978 -  8973.391:   91.3556%  (       69)
00:07:12.092   8973.391 -  9023.803:   91.7022%  (       65)
00:07:12.092   9023.803 -  9074.215:   92.0595%  (       67)
00:07:12.092   9074.215 -  9124.628:   92.4008%  (       64)
00:07:12.092   9124.628 -  9175.040:   92.7208%  (       60)
00:07:12.092   9175.040 -  9225.452:   93.0194%  (       56)
00:07:12.092   9225.452 -  9275.865:   93.2701%  (       47)
00:07:12.092   9275.865 -  9326.277:   93.5900%  (       60)
00:07:12.092   9326.277 -  9376.689:   93.8673%  (       52)
00:07:12.092   9376.689 -  9427.102:   94.0913%  (       42)
00:07:12.092   9427.102 -  9477.514:   94.3313%  (       45)
00:07:12.092   9477.514 -  9527.926:   94.5446%  (       40)
00:07:12.092   9527.926 -  9578.338:   94.7366%  (       36)
00:07:12.092   9578.338 -  9628.751:   94.8912%  (       29)
00:07:12.092   9628.751 -  9679.163:   95.0832%  (       36)
00:07:12.092   9679.163 -  9729.575:   95.2378%  (       29)
00:07:12.092   9729.575 -  9779.988:   95.4192%  (       34)
00:07:12.092   9779.988 -  9830.400:   95.6058%  (       35)
00:07:12.092   9830.400 -  9880.812:   95.7765%  (       32)
00:07:12.092   9880.812 -  9931.225:   95.9311%  (       29)
00:07:12.092   9931.225 -  9981.637:   96.0591%  (       24)
00:07:12.092   9981.637 - 10032.049:   96.2084%  (       28)
00:07:12.092  10032.049 - 10082.462:   96.3631%  (       29)
00:07:12.092  10082.462 - 10132.874:   96.4910%  (       24)
00:07:12.092  10132.874 - 10183.286:   96.6190%  (       24)
00:07:12.092  10183.286 - 10233.698:   96.7790%  (       30)
00:07:12.092  10233.698 - 10284.111:   96.9283%  (       28)
00:07:12.092  10284.111 - 10334.523:   97.0776%  (       28)
00:07:12.092  10334.523 - 10384.935:   97.2110%  (       25)
00:07:12.092  10384.935 - 10435.348:   97.3283%  (       22)
00:07:12.092  10435.348 - 10485.760:   97.4776%  (       28)
00:07:12.092  10485.760 - 10536.172:   97.6216%  (       27)
00:07:12.092  10536.172 - 10586.585:   97.7762%  (       29)
00:07:12.092  10586.585 - 10636.997:   97.9149%  (       26)
00:07:12.092  10636.997 - 10687.409:   98.0482%  (       25)
00:07:12.092  10687.409 - 10737.822:   98.1762%  (       24)
00:07:12.092  10737.822 - 10788.234:   98.3148%  (       26)
00:07:12.092  10788.234 - 10838.646:   98.4162%  (       19)
00:07:12.092  10838.646 - 10889.058:   98.5015%  (       16)
00:07:12.092  10889.058 - 10939.471:   98.6028%  (       19)
00:07:12.092  10939.471 - 10989.883:   98.7041%  (       19)
00:07:12.092  10989.883 - 11040.295:   98.7841%  (       15)
00:07:12.092  11040.295 - 11090.708:   98.8748%  (       17)
00:07:12.092  11090.708 - 11141.120:   98.9494%  (       14)
00:07:12.092  11141.120 - 11191.532:   99.0081%  (       11)
00:07:12.092  11191.532 - 11241.945:   99.0508%  (        8)
00:07:12.092  11241.945 - 11292.357:   99.0828%  (        6)
00:07:12.092  11292.357 - 11342.769:   99.1148%  (        6)
00:07:12.092  11342.769 - 11393.182:   99.1468%  (        6)
00:07:12.092  11393.182 - 11443.594:   99.1788%  (        6)
00:07:12.092  11443.594 - 11494.006:   99.2161%  (        7)
00:07:12.092  11494.006 - 11544.418:   99.2267%  (        2)
00:07:12.092  11544.418 - 11594.831:   99.2427%  (        3)
00:07:12.092  11594.831 - 11645.243:   99.2587%  (        3)
00:07:12.092  11645.243 - 11695.655:   99.2747%  (        3)
00:07:12.092  11695.655 - 11746.068:   99.2907%  (        3)
00:07:12.092  11746.068 - 11796.480:   99.3067%  (        3)
00:07:12.092  11796.480 - 11846.892:   99.3174%  (        2)
00:07:12.092  26617.698 - 26819.348:   99.3281%  (        2)
00:07:12.092  26819.348 - 27020.997:   99.3707%  (        8)
00:07:12.092  27020.997 - 27222.646:   99.4134%  (        8)
00:07:12.092  27222.646 - 27424.295:   99.4561%  (        8)
00:07:12.092  27424.295 - 27625.945:   99.4934%  (        7)
00:07:12.092  27625.945 - 27827.594:   99.5360%  (        8)
00:07:12.092  27827.594 - 28029.243:   99.5840%  (        9)
00:07:12.092  28029.243 - 28230.892:   99.6214%  (        7)
00:07:12.092  28230.892 - 28432.542:   99.6587%  (        7)
00:07:12.092  31255.631 - 31457.280:   99.6747%  (        3)
00:07:12.092  31457.280 - 31658.929:   99.7174%  (        8)
00:07:12.092  31658.929 - 31860.578:   99.7600%  (        8)
00:07:12.092  31860.578 - 32062.228:   99.7974%  (        7)
00:07:12.092  32062.228 - 32263.877:   99.8400%  (        8)
00:07:12.092  32263.877 - 32465.526:   99.8827%  (        8)
00:07:12.092  32465.526 - 32667.175:   99.9253%  (        8)
00:07:12.092  32667.175 - 32868.825:   99.9680%  (        8)
00:07:12.092  32868.825 - 33070.474:  100.0000%  (        6)
00:07:12.092  
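The percentile rows in each "Summary latency data" block can be reproduced from the corresponding cumulative histogram: walk the buckets in order and report the first upper bound whose cumulative percentage reaches the target. A minimal sketch (the bucket list here is abbreviated and hypothetical, not copied from the histogram above):

    # (bucket upper bound in us, cumulative percent of IOs completed by then)
    buckets = [
        (5142.055, 1.03),
        (6326.745, 51.47),
        (8822.154, 90.22),
        (9679.163, 95.08),
        (33070.474, 100.00),
    ]

    def percentile(buckets, target_pct):
        # Conservative bucket-granularity estimate: first bucket that
        # accumulates at least target_pct of the IOs.
        for upper_us, cum_pct in buckets:
            if cum_pct >= target_pct:
                return upper_us
        return buckets[-1][0]

    print(percentile(buckets, 50.0))  # 6326.745, cf. the 50.00000% row
    print(percentile(buckets, 99.0))  # 33070.474 with this abbreviated list
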
00:07:12.092  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:12.092  ==============================================================================
00:07:12.092         Range in us     Cumulative    IO count
00:07:12.092   4965.612 -  4990.818:    0.0213%  (        4)
00:07:12.092   4990.818 -  5016.025:    0.0427%  (        4)
00:07:12.092   5016.025 -  5041.231:    0.1067%  (       12)
00:07:12.092   5041.231 -  5066.437:    0.2240%  (       22)
00:07:12.092   5066.437 -  5091.643:    0.3413%  (       22)
00:07:12.092   5091.643 -  5116.849:    0.5066%  (       31)
00:07:12.092   5116.849 -  5142.055:    0.6879%  (       34)
00:07:12.092   5142.055 -  5167.262:    0.8746%  (       35)
00:07:12.092   5167.262 -  5192.468:    1.1145%  (       45)
00:07:12.092   5192.468 -  5217.674:    1.3439%  (       43)
00:07:12.092   5217.674 -  5242.880:    1.6212%  (       52)
00:07:12.092   5242.880 -  5268.086:    1.8718%  (       47)
00:07:12.092   5268.086 -  5293.292:    2.1224%  (       47)
00:07:12.092   5293.292 -  5318.498:    2.4691%  (       65)
00:07:12.092   5318.498 -  5343.705:    2.7784%  (       58)
00:07:12.092   5343.705 -  5368.911:    3.1943%  (       78)
00:07:12.092   5368.911 -  5394.117:    3.5623%  (       69)
00:07:12.092   5394.117 -  5419.323:    3.9569%  (       74)
00:07:12.092   5419.323 -  5444.529:    4.3355%  (       71)
00:07:12.092   5444.529 -  5469.735:    4.7622%  (       80)
00:07:12.092   5469.735 -  5494.942:    5.1834%  (       79)
00:07:12.092   5494.942 -  5520.148:    5.6901%  (       95)
00:07:12.092   5520.148 -  5545.354:    6.2927%  (      113)
00:07:12.092   5545.354 -  5570.560:    6.9646%  (      126)
00:07:12.092   5570.560 -  5595.766:    7.7752%  (      152)
00:07:12.092   5595.766 -  5620.972:    8.5484%  (      145)
00:07:12.092   5620.972 -  5646.178:    9.3377%  (      148)
00:07:12.092   5646.178 -  5671.385:   10.1589%  (      154)
00:07:12.092   5671.385 -  5696.591:   11.0762%  (      172)
00:07:12.092   5696.591 -  5721.797:   12.1160%  (      195)
00:07:12.092   5721.797 -  5747.003:   13.2146%  (      206)
00:07:12.092   5747.003 -  5772.209:   14.4678%  (      235)
00:07:12.092   5772.209 -  5797.415:   15.7903%  (      248)
00:07:12.092   5797.415 -  5822.622:   17.3048%  (      284)
00:07:12.092   5822.622 -  5847.828:   18.9793%  (      314)
00:07:12.092   5847.828 -  5873.034:   20.6431%  (      312)
00:07:12.092   5873.034 -  5898.240:   22.2323%  (      298)
00:07:12.092   5898.240 -  5923.446:   23.9761%  (      327)
00:07:12.092   5923.446 -  5948.652:   25.7253%  (      328)
00:07:12.092   5948.652 -  5973.858:   27.5171%  (      336)
00:07:12.092   5973.858 -  5999.065:   29.3035%  (      335)
00:07:12.092   5999.065 -  6024.271:   31.1967%  (      355)
00:07:12.092   6024.271 -  6049.477:   33.0845%  (      354)
00:07:12.092   6049.477 -  6074.683:   34.9616%  (      352)
00:07:12.092   6074.683 -  6099.889:   36.8227%  (      349)
00:07:12.092   6099.889 -  6125.095:   38.6839%  (      349)
00:07:12.092   6125.095 -  6150.302:   40.5770%  (      355)
00:07:12.092   6150.302 -  6175.508:   42.3795%  (      338)
00:07:12.092   6175.508 -  6200.714:   44.1713%  (      336)
00:07:12.092   6200.714 -  6225.920:   46.0324%  (      349)
00:07:12.092   6225.920 -  6251.126:   47.8242%  (      336)
00:07:12.092   6251.126 -  6276.332:   49.6320%  (      339)
00:07:12.092   6276.332 -  6301.538:   51.3545%  (      323)
00:07:12.092   6301.538 -  6326.745:   53.0663%  (      321)
00:07:12.092   6326.745 -  6351.951:   54.7355%  (      313)
00:07:12.092   6351.951 -  6377.157:   56.3407%  (      301)
00:07:12.092   6377.157 -  6402.363:   57.8925%  (      291)
00:07:12.092   6402.363 -  6427.569:   59.3963%  (      282)
00:07:12.092   6427.569 -  6452.775:   60.9055%  (      283)
00:07:12.092   6452.775 -  6503.188:   63.8439%  (      551)
00:07:12.092   6503.188 -  6553.600:   66.5209%  (      502)
00:07:12.092   6553.600 -  6604.012:   68.8567%  (      438)
00:07:12.092   6604.012 -  6654.425:   70.6911%  (      344)
00:07:12.092   6654.425 -  6704.837:   72.1150%  (      267)
00:07:12.092   6704.837 -  6755.249:   73.2509%  (      213)
00:07:12.092   6755.249 -  6805.662:   74.1201%  (      163)
00:07:12.092   6805.662 -  6856.074:   74.8560%  (      138)
00:07:12.092   6856.074 -  6906.486:   75.4160%  (      105)
00:07:12.092   6906.486 -  6956.898:   75.9172%  (       94)
00:07:12.092   6956.898 -  7007.311:   76.3652%  (       84)
00:07:12.092   7007.311 -  7057.723:   76.8025%  (       82)
00:07:12.092   7057.723 -  7108.135:   77.2558%  (       85)
00:07:12.092   7108.135 -  7158.548:   77.7250%  (       88)
00:07:12.092   7158.548 -  7208.960:   78.1463%  (       79)
00:07:12.092   7208.960 -  7259.372:   78.5943%  (       84)
00:07:12.092   7259.372 -  7309.785:   79.0636%  (       88)
00:07:12.092   7309.785 -  7360.197:   79.5435%  (       90)
00:07:12.092   7360.197 -  7410.609:   79.9755%  (       81)
00:07:12.092   7410.609 -  7461.022:   80.4128%  (       82)
00:07:12.092   7461.022 -  7511.434:   80.8180%  (       76)
00:07:12.093   7511.434 -  7561.846:   81.1807%  (       68)
00:07:12.093   7561.846 -  7612.258:   81.5273%  (       65)
00:07:12.093   7612.258 -  7662.671:   81.8953%  (       69)
00:07:12.093   7662.671 -  7713.083:   82.2899%  (       74)
00:07:12.093   7713.083 -  7763.495:   82.6898%  (       75)
00:07:12.093   7763.495 -  7813.908:   83.0418%  (       66)
00:07:12.093   7813.908 -  7864.320:   83.4204%  (       71)
00:07:12.093   7864.320 -  7914.732:   83.8204%  (       75)
00:07:12.093   7914.732 -  7965.145:   84.2257%  (       76)
00:07:12.093   7965.145 -  8015.557:   84.5936%  (       69)
00:07:12.093   8015.557 -  8065.969:   85.0043%  (       77)
00:07:12.093   8065.969 -  8116.382:   85.3882%  (       72)
00:07:12.093   8116.382 -  8166.794:   85.7029%  (       59)
00:07:12.093   8166.794 -  8217.206:   86.0282%  (       61)
00:07:12.093   8217.206 -  8267.618:   86.3588%  (       62)
00:07:12.093   8267.618 -  8318.031:   86.6788%  (       60)
00:07:12.093   8318.031 -  8368.443:   87.0360%  (       67)
00:07:12.093   8368.443 -  8418.855:   87.3507%  (       59)
00:07:12.093   8418.855 -  8469.268:   87.6920%  (       64)
00:07:12.093   8469.268 -  8519.680:   88.1133%  (       79)
00:07:12.093   8519.680 -  8570.092:   88.4812%  (       69)
00:07:12.093   8570.092 -  8620.505:   88.8225%  (       64)
00:07:12.093   8620.505 -  8670.917:   89.1852%  (       68)
00:07:12.093   8670.917 -  8721.329:   89.5318%  (       65)
00:07:12.093   8721.329 -  8771.742:   89.9051%  (       70)
00:07:12.093   8771.742 -  8822.154:   90.2784%  (       70)
00:07:12.093   8822.154 -  8872.566:   90.6463%  (       69)
00:07:12.093   8872.566 -  8922.978:   90.9823%  (       63)
00:07:12.093   8922.978 -  8973.391:   91.3556%  (       70)
00:07:12.093   8973.391 -  9023.803:   91.7022%  (       65)
00:07:12.093   9023.803 -  9074.215:   92.0862%  (       72)
00:07:12.093   9074.215 -  9124.628:   92.4061%  (       60)
00:07:12.093   9124.628 -  9175.040:   92.6941%  (       54)
00:07:12.093   9175.040 -  9225.452:   93.0034%  (       58)
00:07:12.093   9225.452 -  9275.865:   93.2860%  (       53)
00:07:12.093   9275.865 -  9326.277:   93.5687%  (       53)
00:07:12.093   9326.277 -  9376.689:   93.7713%  (       38)
00:07:12.093   9376.689 -  9427.102:   93.9846%  (       40)
00:07:12.093   9427.102 -  9477.514:   94.2140%  (       43)
00:07:12.093   9477.514 -  9527.926:   94.4219%  (       39)
00:07:12.093   9527.926 -  9578.338:   94.6459%  (       42)
00:07:12.093   9578.338 -  9628.751:   94.8432%  (       37)
00:07:12.093   9628.751 -  9679.163:   95.0459%  (       38)
00:07:12.093   9679.163 -  9729.575:   95.2005%  (       29)
00:07:12.093   9729.575 -  9779.988:   95.3285%  (       24)
00:07:12.093   9779.988 -  9830.400:   95.4298%  (       19)
00:07:12.093   9830.400 -  9880.812:   95.5311%  (       19)
00:07:12.093   9880.812 -  9931.225:   95.6485%  (       22)
00:07:12.093   9931.225 -  9981.637:   95.7711%  (       23)
00:07:12.093   9981.637 - 10032.049:   95.8831%  (       21)
00:07:12.093  10032.049 - 10082.462:   96.0218%  (       26)
00:07:12.093  10082.462 - 10132.874:   96.1764%  (       29)
00:07:12.093  10132.874 - 10183.286:   96.3311%  (       29)
00:07:12.093  10183.286 - 10233.698:   96.4910%  (       30)
00:07:12.093  10233.698 - 10284.111:   96.6617%  (       32)
00:07:12.093  10284.111 - 10334.523:   96.8163%  (       29)
00:07:12.093  10334.523 - 10384.935:   96.9977%  (       34)
00:07:12.093  10384.935 - 10435.348:   97.1523%  (       29)
00:07:12.093  10435.348 - 10485.760:   97.3123%  (       30)
00:07:12.093  10485.760 - 10536.172:   97.4669%  (       29)
00:07:12.093  10536.172 - 10586.585:   97.6109%  (       27)
00:07:12.093  10586.585 - 10636.997:   97.7549%  (       27)
00:07:12.093  10636.997 - 10687.409:   97.9256%  (       32)
00:07:12.093  10687.409 - 10737.822:   98.0802%  (       29)
00:07:12.093  10737.822 - 10788.234:   98.2242%  (       27)
00:07:12.093  10788.234 - 10838.646:   98.3788%  (       29)
00:07:12.093  10838.646 - 10889.058:   98.4962%  (       22)
00:07:12.093  10889.058 - 10939.471:   98.6135%  (       22)
00:07:12.093  10939.471 - 10989.883:   98.7095%  (       18)
00:07:12.093  10989.883 - 11040.295:   98.8001%  (       17)
00:07:12.093  11040.295 - 11090.708:   98.8961%  (       18)
00:07:12.093  11090.708 - 11141.120:   98.9761%  (       15)
00:07:12.093  11141.120 - 11191.532:   99.0348%  (       11)
00:07:12.093  11191.532 - 11241.945:   99.0988%  (       12)
00:07:12.093  11241.945 - 11292.357:   99.1414%  (        8)
00:07:12.093  11292.357 - 11342.769:   99.1734%  (        6)
00:07:12.093  11342.769 - 11393.182:   99.2161%  (        8)
00:07:12.093  11393.182 - 11443.594:   99.2481%  (        6)
00:07:12.093  11443.594 - 11494.006:   99.2854%  (        7)
00:07:12.093  11494.006 - 11544.418:   99.3121%  (        5)
00:07:12.093  11544.418 - 11594.831:   99.3174%  (        1)
00:07:12.093  25105.329 - 25206.154:   99.3281%  (        2)
00:07:12.093  25206.154 - 25306.978:   99.3494%  (        4)
00:07:12.093  25306.978 - 25407.803:   99.3761%  (        5)
00:07:12.093  25407.803 - 25508.628:   99.3974%  (        4)
00:07:12.093  25508.628 - 25609.452:   99.4187%  (        4)
00:07:12.093  25609.452 - 25710.277:   99.4347%  (        3)
00:07:12.093  25710.277 - 25811.102:   99.4561%  (        4)
00:07:12.093  25811.102 - 26012.751:   99.5041%  (        9)
00:07:12.093  26012.751 - 26214.400:   99.5414%  (        7)
00:07:12.093  26214.400 - 26416.049:   99.5894%  (        9)
00:07:12.093  26416.049 - 26617.698:   99.6320%  (        8)
00:07:12.093  26617.698 - 26819.348:   99.6587%  (        5)
00:07:12.093  29642.437 - 29844.086:   99.6747%  (        3)
00:07:12.093  29844.086 - 30045.735:   99.7227%  (        9)
00:07:12.093  30045.735 - 30247.385:   99.7654%  (        8)
00:07:12.093  30247.385 - 30449.034:   99.8134%  (        9)
00:07:12.093  30449.034 - 30650.683:   99.8560%  (        8)
00:07:12.093  30650.683 - 30852.332:   99.8987%  (        8)
00:07:12.093  30852.332 - 31053.982:   99.9467%  (        9)
00:07:12.093  31053.982 - 31255.631:   99.9947%  (        9)
00:07:12.093  31255.631 - 31457.280:  100.0000%  (        1)
00:07:12.093  
00:07:12.093  Latency histogram for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:07:12.093  ==============================================================================
00:07:12.093         Range in us     Cumulative    IO count
00:07:12.093   4990.818 -  5016.025:    0.0160%  (        3)
00:07:12.093   5016.025 -  5041.231:    0.0907%  (       14)
00:07:12.093   5041.231 -  5066.437:    0.1653%  (       14)
00:07:12.093   5066.437 -  5091.643:    0.2400%  (       14)
00:07:12.093   5091.643 -  5116.849:    0.3733%  (       25)
00:07:12.093   5116.849 -  5142.055:    0.5919%  (       41)
00:07:12.093   5142.055 -  5167.262:    0.8266%  (       44)
00:07:12.093   5167.262 -  5192.468:    1.1039%  (       52)
00:07:12.093   5192.468 -  5217.674:    1.3385%  (       44)
00:07:12.093   5217.674 -  5242.880:    1.5625%  (       42)
00:07:12.093   5242.880 -  5268.086:    1.8398%  (       52)
00:07:12.093   5268.086 -  5293.292:    2.1598%  (       60)
00:07:12.093   5293.292 -  5318.498:    2.4637%  (       57)
00:07:12.093   5318.498 -  5343.705:    2.7730%  (       58)
00:07:12.093   5343.705 -  5368.911:    3.1090%  (       63)
00:07:12.093   5368.911 -  5394.117:    3.4290%  (       60)
00:07:12.093   5394.117 -  5419.323:    3.8129%  (       72)
00:07:12.093   5419.323 -  5444.529:    4.1916%  (       71)
00:07:12.093   5444.529 -  5469.735:    4.5968%  (       76)
00:07:12.093   5469.735 -  5494.942:    5.0715%  (       89)
00:07:12.093   5494.942 -  5520.148:    5.5461%  (       89)
00:07:12.093   5520.148 -  5545.354:    6.0900%  (      102)
00:07:12.093   5545.354 -  5570.560:    6.7833%  (      130)
00:07:12.093   5570.560 -  5595.766:    7.6259%  (      158)
00:07:12.093   5595.766 -  5620.972:    8.4791%  (      160)
00:07:12.093   5620.972 -  5646.178:    9.2897%  (      152)
00:07:12.093   5646.178 -  5671.385:   10.1376%  (      159)
00:07:12.093   5671.385 -  5696.591:   11.0388%  (      169)
00:07:12.093   5696.591 -  5721.797:   11.9827%  (      177)
00:07:12.093   5721.797 -  5747.003:   13.0813%  (      206)
00:07:12.093   5747.003 -  5772.209:   14.3345%  (      235)
00:07:12.093   5772.209 -  5797.415:   15.7263%  (      261)
00:07:12.093   5797.415 -  5822.622:   17.2195%  (      280)
00:07:12.093   5822.622 -  5847.828:   18.9526%  (      325)
00:07:12.093   5847.828 -  5873.034:   20.6378%  (      316)
00:07:12.093   5873.034 -  5898.240:   22.3176%  (      315)
00:07:12.093   5898.240 -  5923.446:   24.0241%  (      320)
00:07:12.093   5923.446 -  5948.652:   25.8159%  (      336)
00:07:12.093   5948.652 -  5973.858:   27.6451%  (      343)
00:07:12.093   5973.858 -  5999.065:   29.4955%  (      347)
00:07:12.093   5999.065 -  6024.271:   31.3673%  (      351)
00:07:12.093   6024.271 -  6049.477:   33.1965%  (      343)
00:07:12.093   6049.477 -  6074.683:   35.1429%  (      365)
00:07:12.093   6074.683 -  6099.889:   37.0467%  (      357)
00:07:12.093   6099.889 -  6125.095:   38.9985%  (      366)
00:07:12.093   6125.095 -  6150.302:   40.8276%  (      343)
00:07:12.093   6150.302 -  6175.508:   42.7154%  (      354)
00:07:12.093   6175.508 -  6200.714:   44.5339%  (      341)
00:07:12.093   6200.714 -  6225.920:   46.3791%  (      346)
00:07:12.093   6225.920 -  6251.126:   48.2242%  (      346)
00:07:12.093   6251.126 -  6276.332:   50.1120%  (      354)
00:07:12.093   6276.332 -  6301.538:   51.9091%  (      337)
00:07:12.093   6301.538 -  6326.745:   53.6529%  (      327)
00:07:12.093   6326.745 -  6351.951:   55.2901%  (      307)
00:07:12.093   6351.951 -  6377.157:   56.9006%  (      302)
00:07:12.093   6377.157 -  6402.363:   58.4898%  (      298)
00:07:12.093   6402.363 -  6427.569:   59.9829%  (      280)
00:07:12.093   6427.569 -  6452.775:   61.5508%  (      294)
00:07:12.093   6452.775 -  6503.188:   64.5851%  (      569)
00:07:12.093   6503.188 -  6553.600:   67.3262%  (      514)
00:07:12.093   6553.600 -  6604.012:   69.7579%  (      456)
00:07:12.094   6604.012 -  6654.425:   71.5977%  (      345)
00:07:12.094   6654.425 -  6704.837:   73.0535%  (      273)
00:07:12.094   6704.837 -  6755.249:   74.1681%  (      209)
00:07:12.094   6755.249 -  6805.662:   75.1120%  (      177)
00:07:12.094   6805.662 -  6856.074:   75.8106%  (      131)
00:07:12.094   6856.074 -  6906.486:   76.3705%  (      105)
00:07:12.094   6906.486 -  6956.898:   76.8931%  (       98)
00:07:12.094   6956.898 -  7007.311:   77.3304%  (       82)
00:07:12.094   7007.311 -  7057.723:   77.7944%  (       87)
00:07:12.094   7057.723 -  7108.135:   78.2050%  (       77)
00:07:12.094   7108.135 -  7158.548:   78.5836%  (       71)
00:07:12.094   7158.548 -  7208.960:   78.9569%  (       70)
00:07:12.094   7208.960 -  7259.372:   79.3622%  (       76)
00:07:12.094   7259.372 -  7309.785:   79.7675%  (       76)
00:07:12.094   7309.785 -  7360.197:   80.0928%  (       61)
00:07:12.094   7360.197 -  7410.609:   80.4181%  (       61)
00:07:12.094   7410.609 -  7461.022:   80.7167%  (       56)
00:07:12.094   7461.022 -  7511.434:   80.9887%  (       51)
00:07:12.094   7511.434 -  7561.846:   81.3140%  (       61)
00:07:12.094   7561.846 -  7612.258:   81.5966%  (       53)
00:07:12.094   7612.258 -  7662.671:   81.8739%  (       52)
00:07:12.094   7662.671 -  7713.083:   82.2099%  (       63)
00:07:12.094   7713.083 -  7763.495:   82.5032%  (       55)
00:07:12.094   7763.495 -  7813.908:   82.7378%  (       44)
00:07:12.094   7813.908 -  7864.320:   83.0578%  (       60)
00:07:12.094   7864.320 -  7914.732:   83.3724%  (       59)
00:07:12.094   7914.732 -  7965.145:   83.6924%  (       60)
00:07:12.094   7965.145 -  8015.557:   83.9750%  (       53)
00:07:12.094   8015.557 -  8065.969:   84.3217%  (       65)
00:07:12.094   8065.969 -  8116.382:   84.7110%  (       73)
00:07:12.094   8116.382 -  8166.794:   85.1109%  (       75)
00:07:12.094   8166.794 -  8217.206:   85.4629%  (       66)
00:07:12.094   8217.206 -  8267.618:   85.8628%  (       75)
00:07:12.094   8267.618 -  8318.031:   86.2308%  (       69)
00:07:12.094   8318.031 -  8368.443:   86.6201%  (       73)
00:07:12.094   8368.443 -  8418.855:   87.0147%  (       74)
00:07:12.094   8418.855 -  8469.268:   87.4040%  (       73)
00:07:12.094   8469.268 -  8519.680:   87.8146%  (       77)
00:07:12.094   8519.680 -  8570.092:   88.1986%  (       72)
00:07:12.094   8570.092 -  8620.505:   88.5985%  (       75)
00:07:12.094   8620.505 -  8670.917:   89.0092%  (       77)
00:07:12.094   8670.917 -  8721.329:   89.4091%  (       75)
00:07:12.094   8721.329 -  8771.742:   89.7931%  (       72)
00:07:12.094   8771.742 -  8822.154:   90.2197%  (       80)
00:07:12.094   8822.154 -  8872.566:   90.5983%  (       71)
00:07:12.094   8872.566 -  8922.978:   90.8970%  (       56)
00:07:12.094   8922.978 -  8973.391:   91.2436%  (       65)
00:07:12.094   8973.391 -  9023.803:   91.5636%  (       60)
00:07:12.094   9023.803 -  9074.215:   91.8675%  (       57)
00:07:12.094   9074.215 -  9124.628:   92.1448%  (       52)
00:07:12.094   9124.628 -  9175.040:   92.4061%  (       49)
00:07:12.094   9175.040 -  9225.452:   92.6355%  (       43)
00:07:12.094   9225.452 -  9275.865:   92.8808%  (       46)
00:07:12.094   9275.865 -  9326.277:   93.1474%  (       50)
00:07:12.094   9326.277 -  9376.689:   93.3500%  (       38)
00:07:12.094   9376.689 -  9427.102:   93.5314%  (       34)
00:07:12.094   9427.102 -  9477.514:   93.7447%  (       40)
00:07:12.094   9477.514 -  9527.926:   93.9633%  (       41)
00:07:12.094   9527.926 -  9578.338:   94.2033%  (       45)
00:07:12.094   9578.338 -  9628.751:   94.3899%  (       35)
00:07:12.094   9628.751 -  9679.163:   94.5926%  (       38)
00:07:12.094   9679.163 -  9729.575:   94.7686%  (       33)
00:07:12.094   9729.575 -  9779.988:   94.9392%  (       32)
00:07:12.094   9779.988 -  9830.400:   95.0672%  (       24)
00:07:12.094   9830.400 -  9880.812:   95.1738%  (       20)
00:07:12.094   9880.812 -  9931.225:   95.3125%  (       26)
00:07:12.094   9931.225 -  9981.637:   95.4618%  (       28)
00:07:12.094   9981.637 - 10032.049:   95.6378%  (       33)
00:07:12.094  10032.049 - 10082.462:   95.8351%  (       37)
00:07:12.094  10082.462 - 10132.874:   96.0271%  (       36)
00:07:12.094  10132.874 - 10183.286:   96.2191%  (       36)
00:07:12.094  10183.286 - 10233.698:   96.4377%  (       41)
00:07:12.094  10233.698 - 10284.111:   96.6617%  (       42)
00:07:12.094  10284.111 - 10334.523:   96.8590%  (       37)
00:07:12.094  10334.523 - 10384.935:   97.0190%  (       30)
00:07:12.094  10384.935 - 10435.348:   97.1630%  (       27)
00:07:12.094  10435.348 - 10485.760:   97.3123%  (       28)
00:07:12.094  10485.760 - 10536.172:   97.4669%  (       29)
00:07:12.094  10536.172 - 10586.585:   97.6536%  (       35)
00:07:12.094  10586.585 - 10636.997:   97.8082%  (       29)
00:07:12.094  10636.997 - 10687.409:   97.9522%  (       27)
00:07:12.094  10687.409 - 10737.822:   98.0855%  (       25)
00:07:12.094  10737.822 - 10788.234:   98.2242%  (       26)
00:07:12.094  10788.234 - 10838.646:   98.3415%  (       22)
00:07:12.094  10838.646 - 10889.058:   98.4535%  (       21)
00:07:12.094  10889.058 - 10939.471:   98.5442%  (       17)
00:07:12.094  10939.471 - 10989.883:   98.6295%  (       16)
00:07:12.094  10989.883 - 11040.295:   98.7095%  (       15)
00:07:12.094  11040.295 - 11090.708:   98.7895%  (       15)
00:07:12.094  11090.708 - 11141.120:   98.8641%  (       14)
00:07:12.094  11141.120 - 11191.532:   98.9281%  (       12)
00:07:12.094  11191.532 - 11241.945:   99.0028%  (       14)
00:07:12.094  11241.945 - 11292.357:   99.0668%  (       12)
00:07:12.094  11292.357 - 11342.769:   99.1201%  (       10)
00:07:12.094  11342.769 - 11393.182:   99.1734%  (       10)
00:07:12.094  11393.182 - 11443.594:   99.1948%  (        4)
00:07:12.094  11443.594 - 11494.006:   99.2108%  (        3)
00:07:12.094  11494.006 - 11544.418:   99.2267%  (        3)
00:07:12.094  11544.418 - 11594.831:   99.2481%  (        4)
00:07:12.094  11594.831 - 11645.243:   99.2641%  (        3)
00:07:12.094  11645.243 - 11695.655:   99.2854%  (        4)
00:07:12.094  11695.655 - 11746.068:   99.3067%  (        4)
00:07:12.094  11746.068 - 11796.480:   99.3174%  (        2)
00:07:12.094  23693.785 - 23794.609:   99.3387%  (        4)
00:07:12.094  23794.609 - 23895.434:   99.3601%  (        4)
00:07:12.094  23895.434 - 23996.258:   99.3814%  (        4)
00:07:12.094  23996.258 - 24097.083:   99.4027%  (        4)
00:07:12.094  24097.083 - 24197.908:   99.4241%  (        4)
00:07:12.094  24197.908 - 24298.732:   99.4454%  (        4)
00:07:12.094  24298.732 - 24399.557:   99.4667%  (        4)
00:07:12.094  24399.557 - 24500.382:   99.4881%  (        4)
00:07:12.094  24500.382 - 24601.206:   99.5094%  (        4)
00:07:12.094  24601.206 - 24702.031:   99.5307%  (        4)
00:07:12.094  24702.031 - 24802.855:   99.5520%  (        4)
00:07:12.094  24802.855 - 24903.680:   99.5680%  (        3)
00:07:12.094  24903.680 - 25004.505:   99.5894%  (        4)
00:07:12.094  25004.505 - 25105.329:   99.6107%  (        4)
00:07:12.094  25105.329 - 25206.154:   99.6320%  (        4)
00:07:12.094  25206.154 - 25306.978:   99.6587%  (        5)
00:07:12.094  28230.892 - 28432.542:   99.6747%  (        3)
00:07:12.094  28432.542 - 28634.191:   99.7174%  (        8)
00:07:12.094  28634.191 - 28835.840:   99.7600%  (        8)
00:07:12.094  28835.840 - 29037.489:   99.8027%  (        8)
00:07:12.094  29037.489 - 29239.138:   99.8453%  (        8)
00:07:12.094  29239.138 - 29440.788:   99.8880%  (        8)
00:07:12.094  29440.788 - 29642.437:   99.9360%  (        9)
00:07:12.094  29642.437 - 29844.086:   99.9787%  (        8)
00:07:12.094  29844.086 - 30045.735:  100.0000%  (        4)
00:07:12.094  
00:07:12.094  Latency histogram for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:07:12.094  ==============================================================================
00:07:12.094         Range in us     Cumulative    IO count
00:07:12.094   4965.612 -  4990.818:    0.0107%  (        2)
00:07:12.094   4990.818 -  5016.025:    0.0373%  (        5)
00:07:12.094   5016.025 -  5041.231:    0.0427%  (        1)
00:07:12.094   5041.231 -  5066.437:    0.1333%  (       17)
00:07:12.094   5066.437 -  5091.643:    0.2506%  (       22)
00:07:12.094   5091.643 -  5116.849:    0.3893%  (       26)
00:07:12.094   5116.849 -  5142.055:    0.5226%  (       25)
00:07:12.094   5142.055 -  5167.262:    0.6826%  (       30)
00:07:12.094   5167.262 -  5192.468:    0.9279%  (       46)
00:07:12.094   5192.468 -  5217.674:    1.2159%  (       54)
00:07:12.094   5217.674 -  5242.880:    1.5678%  (       66)
00:07:12.094   5242.880 -  5268.086:    1.8345%  (       50)
00:07:12.094   5268.086 -  5293.292:    2.1118%  (       52)
00:07:12.094   5293.292 -  5318.498:    2.4531%  (       64)
00:07:12.094   5318.498 -  5343.705:    2.7464%  (       55)
00:07:12.094   5343.705 -  5368.911:    3.0343%  (       54)
00:07:12.094   5368.911 -  5394.117:    3.3490%  (       59)
00:07:12.094   5394.117 -  5419.323:    3.6743%  (       61)
00:07:12.094   5419.323 -  5444.529:    4.0849%  (       77)
00:07:12.094   5444.529 -  5469.735:    4.5328%  (       84)
00:07:12.094   5469.735 -  5494.942:    5.0021%  (       88)
00:07:12.094   5494.942 -  5520.148:    5.4927%  (       92)
00:07:12.094   5520.148 -  5545.354:    6.0953%  (      113)
00:07:12.094   5545.354 -  5570.560:    6.7566%  (      124)
00:07:12.094   5570.560 -  5595.766:    7.4872%  (      137)
00:07:12.094   5595.766 -  5620.972:    8.2125%  (      136)
00:07:12.094   5620.972 -  5646.178:    9.0977%  (      166)
00:07:12.094   5646.178 -  5671.385:    9.9296%  (      156)
00:07:12.094   5671.385 -  5696.591:   10.9375%  (      189)
00:07:12.094   5696.591 -  5721.797:   11.9721%  (      194)
00:07:12.094   5721.797 -  5747.003:   13.1506%  (      221)
00:07:12.094   5747.003 -  5772.209:   14.4038%  (      235)
00:07:12.094   5772.209 -  5797.415:   15.7637%  (      255)
00:07:12.094   5797.415 -  5822.622:   17.2515%  (      279)
00:07:12.094   5822.622 -  5847.828:   18.7553%  (      282)
00:07:12.094   5847.828 -  5873.034:   20.4245%  (      313)
00:07:12.094   5873.034 -  5898.240:   22.2536%  (      343)
00:07:12.094   5898.240 -  5923.446:   24.0401%  (      335)
00:07:12.094   5923.446 -  5948.652:   25.8106%  (      332)
00:07:12.094   5948.652 -  5973.858:   27.5171%  (      320)
00:07:12.094   5973.858 -  5999.065:   29.2502%  (      325)
00:07:12.094   5999.065 -  6024.271:   31.0847%  (      344)
00:07:12.094   6024.271 -  6049.477:   33.0578%  (      370)
00:07:12.094   6049.477 -  6074.683:   35.0309%  (      370)
00:07:12.094   6074.683 -  6099.889:   37.0147%  (      372)
00:07:12.094   6099.889 -  6125.095:   38.9558%  (      364)
00:07:12.094   6125.095 -  6150.302:   40.9236%  (      369)
00:07:12.094   6150.302 -  6175.508:   42.8701%  (      365)
00:07:12.094   6175.508 -  6200.714:   44.6566%  (      335)
00:07:12.094   6200.714 -  6225.920:   46.5444%  (      354)
00:07:12.094   6225.920 -  6251.126:   48.3682%  (      342)
00:07:12.094   6251.126 -  6276.332:   50.1173%  (      328)
00:07:12.094   6276.332 -  6301.538:   51.8931%  (      333)
00:07:12.094   6301.538 -  6326.745:   53.6263%  (      325)
00:07:12.094   6326.745 -  6351.951:   55.2901%  (      312)
00:07:12.095   6351.951 -  6377.157:   56.9379%  (      309)
00:07:12.095   6377.157 -  6402.363:   58.5964%  (      311)
00:07:12.095   6402.363 -  6427.569:   60.1216%  (      286)
00:07:12.095   6427.569 -  6452.775:   61.6361%  (      284)
00:07:12.095   6452.775 -  6503.188:   64.6491%  (      565)
00:07:12.095   6503.188 -  6553.600:   67.4595%  (      527)
00:07:12.095   6553.600 -  6604.012:   69.8912%  (      456)
00:07:12.095   6604.012 -  6654.425:   71.8323%  (      364)
00:07:12.095   6654.425 -  6704.837:   73.3148%  (      278)
00:07:12.095   6704.837 -  6755.249:   74.4294%  (      209)
00:07:12.095   6755.249 -  6805.662:   75.3360%  (      170)
00:07:12.095   6805.662 -  6856.074:   76.1145%  (      146)
00:07:12.095   6856.074 -  6906.486:   76.6798%  (      106)
00:07:12.095   6906.486 -  6956.898:   77.0744%  (       74)
00:07:12.095   6956.898 -  7007.311:   77.4584%  (       72)
00:07:12.095   7007.311 -  7057.723:   77.8690%  (       77)
00:07:12.095   7057.723 -  7108.135:   78.2263%  (       67)
00:07:12.095   7108.135 -  7158.548:   78.6209%  (       74)
00:07:12.095   7158.548 -  7208.960:   79.0209%  (       75)
00:07:12.095   7208.960 -  7259.372:   79.3995%  (       71)
00:07:12.095   7259.372 -  7309.785:   79.8048%  (       76)
00:07:12.095   7309.785 -  7360.197:   80.1515%  (       65)
00:07:12.095   7360.197 -  7410.609:   80.4661%  (       59)
00:07:12.095   7410.609 -  7461.022:   80.7487%  (       53)
00:07:12.095   7461.022 -  7511.434:   80.9940%  (       46)
00:07:12.095   7511.434 -  7561.846:   81.2500%  (       48)
00:07:12.095   7561.846 -  7612.258:   81.5006%  (       47)
00:07:12.095   7612.258 -  7662.671:   81.7406%  (       45)
00:07:12.095   7662.671 -  7713.083:   81.9753%  (       44)
00:07:12.095   7713.083 -  7763.495:   82.2259%  (       47)
00:07:12.095   7763.495 -  7813.908:   82.5352%  (       58)
00:07:12.095   7813.908 -  7864.320:   82.8285%  (       55)
00:07:12.095   7864.320 -  7914.732:   83.1378%  (       58)
00:07:12.095   7914.732 -  7965.145:   83.4418%  (       57)
00:07:12.095   7965.145 -  8015.557:   83.7244%  (       53)
00:07:12.095   8015.557 -  8065.969:   84.0497%  (       61)
00:07:12.095   8065.969 -  8116.382:   84.4870%  (       82)
00:07:12.095   8116.382 -  8166.794:   84.8603%  (       70)
00:07:12.095   8166.794 -  8217.206:   85.1962%  (       63)
00:07:12.095   8217.206 -  8267.618:   85.5375%  (       64)
00:07:12.095   8267.618 -  8318.031:   85.9962%  (       86)
00:07:12.095   8318.031 -  8368.443:   86.4548%  (       86)
00:07:12.095   8368.443 -  8418.855:   86.8974%  (       83)
00:07:12.095   8418.855 -  8469.268:   87.3720%  (       89)
00:07:12.095   8469.268 -  8519.680:   87.8146%  (       83)
00:07:12.095   8519.680 -  8570.092:   88.3052%  (       92)
00:07:12.095   8570.092 -  8620.505:   88.7372%  (       81)
00:07:12.095   8620.505 -  8670.917:   89.1105%  (       70)
00:07:12.095   8670.917 -  8721.329:   89.5051%  (       74)
00:07:12.095   8721.329 -  8771.742:   89.9051%  (       75)
00:07:12.095   8771.742 -  8822.154:   90.2677%  (       68)
00:07:12.095   8822.154 -  8872.566:   90.6410%  (       70)
00:07:12.095   8872.566 -  8922.978:   90.9183%  (       52)
00:07:12.095   8922.978 -  8973.391:   91.1903%  (       51)
00:07:12.095   8973.391 -  9023.803:   91.4622%  (       51)
00:07:12.095   9023.803 -  9074.215:   91.7289%  (       50)
00:07:12.095   9074.215 -  9124.628:   91.9742%  (       46)
00:07:12.095   9124.628 -  9175.040:   92.2622%  (       54)
00:07:12.095   9175.040 -  9225.452:   92.5021%  (       45)
00:07:12.095   9225.452 -  9275.865:   92.7261%  (       42)
00:07:12.095   9275.865 -  9326.277:   92.9714%  (       46)
00:07:12.095   9326.277 -  9376.689:   93.1847%  (       40)
00:07:12.095   9376.689 -  9427.102:   93.3714%  (       35)
00:07:12.095   9427.102 -  9477.514:   93.5474%  (       33)
00:07:12.095   9477.514 -  9527.926:   93.7447%  (       37)
00:07:12.095   9527.926 -  9578.338:   93.9420%  (       37)
00:07:12.095   9578.338 -  9628.751:   94.1340%  (       36)
00:07:12.095   9628.751 -  9679.163:   94.3366%  (       38)
00:07:12.095   9679.163 -  9729.575:   94.5286%  (       36)
00:07:12.095   9729.575 -  9779.988:   94.7419%  (       40)
00:07:12.095   9779.988 -  9830.400:   94.9445%  (       38)
00:07:12.095   9830.400 -  9880.812:   95.1578%  (       40)
00:07:12.095   9880.812 -  9931.225:   95.3605%  (       38)
00:07:12.095   9931.225 -  9981.637:   95.5791%  (       41)
00:07:12.095   9981.637 - 10032.049:   95.7818%  (       38)
00:07:12.095  10032.049 - 10082.462:   96.0218%  (       45)
00:07:12.095  10082.462 - 10132.874:   96.2191%  (       37)
00:07:12.095  10132.874 - 10183.286:   96.4164%  (       37)
00:07:12.095  10183.286 - 10233.698:   96.5977%  (       34)
00:07:12.095  10233.698 - 10284.111:   96.7843%  (       35)
00:07:12.095  10284.111 - 10334.523:   96.9763%  (       36)
00:07:12.095  10334.523 - 10384.935:   97.1470%  (       32)
00:07:12.095  10384.935 - 10435.348:   97.3336%  (       35)
00:07:12.095  10435.348 - 10485.760:   97.5096%  (       33)
00:07:12.095  10485.760 - 10536.172:   97.6749%  (       31)
00:07:12.095  10536.172 - 10586.585:   97.8509%  (       33)
00:07:12.095  10586.585 - 10636.997:   98.0322%  (       34)
00:07:12.095  10636.997 - 10687.409:   98.1869%  (       29)
00:07:12.095  10687.409 - 10737.822:   98.3468%  (       30)
00:07:12.095  10737.822 - 10788.234:   98.4695%  (       23)
00:07:12.095  10788.234 - 10838.646:   98.5602%  (       17)
00:07:12.095  10838.646 - 10889.058:   98.6135%  (       10)
00:07:12.095  10889.058 - 10939.471:   98.6508%  (        7)
00:07:12.095  10939.471 - 10989.883:   98.6935%  (        8)
00:07:12.095  10989.883 - 11040.295:   98.7308%  (        7)
00:07:12.095  11040.295 - 11090.708:   98.7681%  (        7)
00:07:12.095  11090.708 - 11141.120:   98.8055%  (        7)
00:07:12.095  11141.120 - 11191.532:   98.8588%  (       10)
00:07:12.095  11191.532 - 11241.945:   98.9174%  (       11)
00:07:12.095  11241.945 - 11292.357:   98.9761%  (       11)
00:07:12.095  11292.357 - 11342.769:   99.0401%  (       12)
00:07:12.095  11342.769 - 11393.182:   99.0881%  (        9)
00:07:12.095  11393.182 - 11443.594:   99.1041%  (        3)
00:07:12.095  11443.594 - 11494.006:   99.1148%  (        2)
00:07:12.095  11494.006 - 11544.418:   99.1361%  (        4)
00:07:12.095  11544.418 - 11594.831:   99.1574%  (        4)
00:07:12.095  11594.831 - 11645.243:   99.1788%  (        4)
00:07:12.095  11645.243 - 11695.655:   99.1948%  (        3)
00:07:12.095  11695.655 - 11746.068:   99.2161%  (        4)
00:07:12.095  11746.068 - 11796.480:   99.2374%  (        4)
00:07:12.095  11796.480 - 11846.892:   99.2587%  (        4)
00:07:12.095  11846.892 - 11897.305:   99.2801%  (        4)
00:07:12.095  11897.305 - 11947.717:   99.2961%  (        3)
00:07:12.095  11947.717 - 11998.129:   99.3174%  (        4)
00:07:12.095  21979.766 - 22080.591:   99.3281%  (        2)
00:07:12.095  22080.591 - 22181.415:   99.3494%  (        4)
00:07:12.095  22181.415 - 22282.240:   99.3707%  (        4)
00:07:12.095  22282.240 - 22383.065:   99.3921%  (        4)
00:07:12.095  22383.065 - 22483.889:   99.4134%  (        4)
00:07:12.095  22483.889 - 22584.714:   99.4347%  (        4)
00:07:12.095  22584.714 - 22685.538:   99.4614%  (        5)
00:07:12.095  22685.538 - 22786.363:   99.4827%  (        4)
00:07:12.095  22786.363 - 22887.188:   99.5041%  (        4)
00:07:12.095  22887.188 - 22988.012:   99.5254%  (        4)
00:07:12.095  22988.012 - 23088.837:   99.5520%  (        5)
00:07:12.095  23088.837 - 23189.662:   99.5734%  (        4)
00:07:12.095  23189.662 - 23290.486:   99.5947%  (        4)
00:07:12.095  23290.486 - 23391.311:   99.6160%  (        4)
00:07:12.095  23391.311 - 23492.135:   99.6374%  (        4)
00:07:12.095  23492.135 - 23592.960:   99.6587%  (        4)
00:07:12.095  26416.049 - 26617.698:   99.6694%  (        2)
00:07:12.095  26617.698 - 26819.348:   99.7067%  (        7)
00:07:12.095  26819.348 - 27020.997:   99.7547%  (        9)
00:07:12.095  27020.997 - 27222.646:   99.7974%  (        8)
00:07:12.095  27222.646 - 27424.295:   99.8453%  (        9)
00:07:12.095  27424.295 - 27625.945:   99.8880%  (        8)
00:07:12.095  27625.945 - 27827.594:   99.9307%  (        8)
00:07:12.095  27827.594 - 28029.243:   99.9733%  (        8)
00:07:12.095  28029.243 - 28230.892:  100.0000%  (        5)
00:07:12.095  
00:07:12.095  Latency histogram for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:07:12.095  ==============================================================================
00:07:12.095         Range in us     Cumulative    IO count
00:07:12.095   4965.612 -  4990.818:    0.0107%  (        2)
00:07:12.096   4990.818 -  5016.025:    0.0320%  (        4)
00:07:12.096   5016.025 -  5041.231:    0.0853%  (       10)
00:07:12.096   5041.231 -  5066.437:    0.1440%  (       11)
00:07:12.096   5066.437 -  5091.643:    0.2293%  (       16)
00:07:12.096   5091.643 -  5116.849:    0.3680%  (       26)
00:07:12.096   5116.849 -  5142.055:    0.5599%  (       36)
00:07:12.096   5142.055 -  5167.262:    0.7733%  (       40)
00:07:12.096   5167.262 -  5192.468:    1.0132%  (       45)
00:07:12.096   5192.468 -  5217.674:    1.2372%  (       42)
00:07:12.096   5217.674 -  5242.880:    1.4452%  (       39)
00:07:12.096   5242.880 -  5268.086:    1.7172%  (       51)
00:07:12.096   5268.086 -  5293.292:    2.0584%  (       64)
00:07:12.096   5293.292 -  5318.498:    2.4477%  (       73)
00:07:12.096   5318.498 -  5343.705:    2.7464%  (       56)
00:07:12.096   5343.705 -  5368.911:    3.0823%  (       63)
00:07:12.096   5368.911 -  5394.117:    3.4503%  (       69)
00:07:12.096   5394.117 -  5419.323:    3.7596%  (       58)
00:07:12.096   5419.323 -  5444.529:    4.1596%  (       75)
00:07:12.096   5444.529 -  5469.735:    4.5702%  (       77)
00:07:12.096   5469.735 -  5494.942:    4.9915%  (       79)
00:07:12.096   5494.942 -  5520.148:    5.4821%  (       92)
00:07:12.096   5520.148 -  5545.354:    6.0154%  (      100)
00:07:12.096   5545.354 -  5570.560:    6.6820%  (      125)
00:07:12.096   5570.560 -  5595.766:    7.5299%  (      159)
00:07:12.096   5595.766 -  5620.972:    8.2818%  (      141)
00:07:12.096   5620.972 -  5646.178:    9.1084%  (      155)
00:07:12.096   5646.178 -  5671.385:   10.1216%  (      190)
00:07:12.096   5671.385 -  5696.591:   11.1295%  (      189)
00:07:12.096   5696.591 -  5721.797:   12.0787%  (      178)
00:07:12.096   5721.797 -  5747.003:   13.1346%  (      198)
00:07:12.096   5747.003 -  5772.209:   14.3505%  (      228)
00:07:12.096   5772.209 -  5797.415:   15.7370%  (      260)
00:07:12.096   5797.415 -  5822.622:   17.1235%  (      260)
00:07:12.096   5822.622 -  5847.828:   18.7767%  (      310)
00:07:12.096   5847.828 -  5873.034:   20.4032%  (      305)
00:07:12.096   5873.034 -  5898.240:   22.0670%  (      312)
00:07:12.096   5898.240 -  5923.446:   23.7948%  (      324)
00:07:12.096   5923.446 -  5948.652:   25.5973%  (      338)
00:07:12.096   5948.652 -  5973.858:   27.4104%  (      340)
00:07:12.096   5973.858 -  5999.065:   29.3142%  (      357)
00:07:12.096   5999.065 -  6024.271:   31.1060%  (      336)
00:07:12.096   6024.271 -  6049.477:   32.9991%  (      355)
00:07:12.096   6049.477 -  6074.683:   34.8603%  (      349)
00:07:12.096   6074.683 -  6099.889:   36.6574%  (      337)
00:07:12.096   6099.889 -  6125.095:   38.4972%  (      345)
00:07:12.096   6125.095 -  6150.302:   40.3850%  (      354)
00:07:12.096   6150.302 -  6175.508:   42.2302%  (      346)
00:07:12.096   6175.508 -  6200.714:   44.0433%  (      340)
00:07:12.096   6200.714 -  6225.920:   45.9471%  (      357)
00:07:12.096   6225.920 -  6251.126:   47.8029%  (      348)
00:07:12.096   6251.126 -  6276.332:   49.5467%  (      327)
00:07:12.096   6276.332 -  6301.538:   51.2745%  (      324)
00:07:12.096   6301.538 -  6326.745:   52.9490%  (      314)
00:07:12.096   6326.745 -  6351.951:   54.6022%  (      310)
00:07:12.096   6351.951 -  6377.157:   56.1753%  (      295)
00:07:12.096   6377.157 -  6402.363:   57.7432%  (      294)
00:07:12.096   6402.363 -  6427.569:   59.3483%  (      301)
00:07:12.096   6427.569 -  6452.775:   60.8735%  (      286)
00:07:12.096   6452.775 -  6503.188:   63.8332%  (      555)
00:07:12.096   6503.188 -  6553.600:   66.5796%  (      515)
00:07:12.096   6553.600 -  6604.012:   69.0166%  (      457)
00:07:12.096   6604.012 -  6654.425:   70.9631%  (      365)
00:07:12.096   6654.425 -  6704.837:   72.5203%  (      292)
00:07:12.096   6704.837 -  6755.249:   73.5548%  (      194)
00:07:12.096   6755.249 -  6805.662:   74.4827%  (      174)
00:07:12.096   6805.662 -  6856.074:   75.3040%  (      154)
00:07:12.096   6856.074 -  6906.486:   75.9599%  (      123)
00:07:12.096   6906.486 -  6956.898:   76.5465%  (      110)
00:07:12.096   6956.898 -  7007.311:   77.0691%  (       98)
00:07:12.096   7007.311 -  7057.723:   77.5704%  (       94)
00:07:12.096   7057.723 -  7108.135:   78.0183%  (       84)
00:07:12.096   7108.135 -  7158.548:   78.5036%  (       91)
00:07:12.096   7158.548 -  7208.960:   78.9409%  (       82)
00:07:12.096   7208.960 -  7259.372:   79.3409%  (       75)
00:07:12.096   7259.372 -  7309.785:   79.7035%  (       68)
00:07:12.096   7309.785 -  7360.197:   80.0715%  (       69)
00:07:12.096   7360.197 -  7410.609:   80.3648%  (       55)
00:07:12.096   7410.609 -  7461.022:   80.6901%  (       61)
00:07:12.096   7461.022 -  7511.434:   80.9940%  (       57)
00:07:12.096   7511.434 -  7561.846:   81.2553%  (       49)
00:07:12.096   7561.846 -  7612.258:   81.5380%  (       53)
00:07:12.096   7612.258 -  7662.671:   81.8206%  (       53)
00:07:12.096   7662.671 -  7713.083:   82.0499%  (       43)
00:07:12.096   7713.083 -  7763.495:   82.3166%  (       50)
00:07:12.096   7763.495 -  7813.908:   82.6419%  (       61)
00:07:12.096   7813.908 -  7864.320:   82.9138%  (       51)
00:07:12.096   7864.320 -  7914.732:   83.1645%  (       47)
00:07:12.096   7914.732 -  7965.145:   83.4364%  (       51)
00:07:12.096   7965.145 -  8015.557:   83.8044%  (       69)
00:07:12.096   8015.557 -  8065.969:   84.1724%  (       69)
00:07:12.096   8065.969 -  8116.382:   84.5723%  (       75)
00:07:12.096   8116.382 -  8166.794:   84.9509%  (       71)
00:07:12.096   8166.794 -  8217.206:   85.3136%  (       68)
00:07:12.096   8217.206 -  8267.618:   85.7189%  (       76)
00:07:12.096   8267.618 -  8318.031:   86.0922%  (       70)
00:07:12.096   8318.031 -  8368.443:   86.4708%  (       71)
00:07:12.096   8368.443 -  8418.855:   86.8601%  (       73)
00:07:12.096   8418.855 -  8469.268:   87.2814%  (       79)
00:07:12.096   8469.268 -  8519.680:   87.6866%  (       76)
00:07:12.096   8519.680 -  8570.092:   88.1133%  (       80)
00:07:12.096   8570.092 -  8620.505:   88.5719%  (       86)
00:07:12.096   8620.505 -  8670.917:   88.9718%  (       75)
00:07:12.096   8670.917 -  8721.329:   89.3771%  (       76)
00:07:12.096   8721.329 -  8771.742:   89.8038%  (       80)
00:07:12.096   8771.742 -  8822.154:   90.1824%  (       71)
00:07:12.096   8822.154 -  8872.566:   90.5823%  (       75)
00:07:12.096   8872.566 -  8922.978:   91.0250%  (       83)
00:07:12.096   8922.978 -  8973.391:   91.4089%  (       72)
00:07:12.096   8973.391 -  9023.803:   91.7609%  (       66)
00:07:12.096   9023.803 -  9074.215:   92.1075%  (       65)
00:07:12.096   9074.215 -  9124.628:   92.4541%  (       65)
00:07:12.096   9124.628 -  9175.040:   92.7794%  (       61)
00:07:12.096   9175.040 -  9225.452:   93.0674%  (       54)
00:07:12.096   9225.452 -  9275.865:   93.3020%  (       44)
00:07:12.096   9275.865 -  9326.277:   93.4994%  (       37)
00:07:12.096   9326.277 -  9376.689:   93.6967%  (       37)
00:07:12.096   9376.689 -  9427.102:   93.8940%  (       37)
00:07:12.096   9427.102 -  9477.514:   94.1020%  (       39)
00:07:12.096   9477.514 -  9527.926:   94.2886%  (       35)
00:07:12.096   9527.926 -  9578.338:   94.4753%  (       35)
00:07:12.096   9578.338 -  9628.751:   94.6566%  (       34)
00:07:12.096   9628.751 -  9679.163:   94.8539%  (       37)
00:07:12.096   9679.163 -  9729.575:   95.0139%  (       30)
00:07:12.096   9729.575 -  9779.988:   95.1952%  (       34)
00:07:12.096   9779.988 -  9830.400:   95.3818%  (       35)
00:07:12.096   9830.400 -  9880.812:   95.5418%  (       30)
00:07:12.096   9880.812 -  9931.225:   95.7071%  (       31)
00:07:12.096   9931.225 -  9981.637:   95.8938%  (       35)
00:07:12.096   9981.637 - 10032.049:   96.0804%  (       35)
00:07:12.096  10032.049 - 10082.462:   96.2404%  (       30)
00:07:12.096  10082.462 - 10132.874:   96.4377%  (       37)
00:07:12.096  10132.874 - 10183.286:   96.6084%  (       32)
00:07:12.096  10183.286 - 10233.698:   96.8217%  (       40)
00:07:12.096  10233.698 - 10284.111:   97.0030%  (       34)
00:07:12.096  10284.111 - 10334.523:   97.2163%  (       40)
00:07:12.096  10334.523 - 10384.935:   97.4083%  (       36)
00:07:12.096  10384.935 - 10435.348:   97.5843%  (       33)
00:07:12.096  10435.348 - 10485.760:   97.7282%  (       27)
00:07:12.096  10485.760 - 10536.172:   97.8669%  (       26)
00:07:12.096  10536.172 - 10586.585:   97.9949%  (       24)
00:07:12.096  10586.585 - 10636.997:   98.1122%  (       22)
00:07:12.096  10636.997 - 10687.409:   98.2189%  (       20)
00:07:12.096  10687.409 - 10737.822:   98.3148%  (       18)
00:07:12.096  10737.822 - 10788.234:   98.4108%  (       18)
00:07:12.096  10788.234 - 10838.646:   98.4855%  (       14)
00:07:12.096  10838.646 - 10889.058:   98.5655%  (       15)
00:07:12.096  10889.058 - 10939.471:   98.6348%  (       13)
00:07:12.096  10939.471 - 10989.883:   98.6935%  (       11)
00:07:12.096  10989.883 - 11040.295:   98.7575%  (       12)
00:07:12.096  11040.295 - 11090.708:   98.7948%  (        7)
00:07:12.096  11090.708 - 11141.120:   98.8321%  (        7)
00:07:12.096  11141.120 - 11191.532:   98.8695%  (        7)
00:07:12.096  11191.532 - 11241.945:   98.9121%  (        8)
00:07:12.096  11241.945 - 11292.357:   98.9548%  (        8)
00:07:12.096  11292.357 - 11342.769:   98.9974%  (        8)
00:07:12.096  11342.769 - 11393.182:   99.0348%  (        7)
00:07:12.096  11393.182 - 11443.594:   99.0668%  (        6)
00:07:12.096  11443.594 - 11494.006:   99.0934%  (        5)
00:07:12.096  11494.006 - 11544.418:   99.1148%  (        4)
00:07:12.096  11544.418 - 11594.831:   99.1361%  (        4)
00:07:12.096  11594.831 - 11645.243:   99.1574%  (        4)
00:07:12.096  11645.243 - 11695.655:   99.1734%  (        3)
00:07:12.096  11695.655 - 11746.068:   99.1948%  (        4)
00:07:12.096  11746.068 - 11796.480:   99.2108%  (        3)
00:07:12.096  11796.480 - 11846.892:   99.2321%  (        4)
00:07:12.096  11846.892 - 11897.305:   99.2534%  (        4)
00:07:12.096  11897.305 - 11947.717:   99.2747%  (        4)
00:07:12.096  11947.717 - 11998.129:   99.2907%  (        3)
00:07:12.096  11998.129 - 12048.542:   99.3121%  (        4)
00:07:12.096  12048.542 - 12098.954:   99.3174%  (        1)
00:07:12.096  20366.572 - 20467.397:   99.3334%  (        3)
00:07:12.096  20467.397 - 20568.222:   99.3547%  (        4)
00:07:12.096  20568.222 - 20669.046:   99.3761%  (        4)
00:07:12.096  20669.046 - 20769.871:   99.3974%  (        4)
00:07:12.097  20769.871 - 20870.695:   99.4187%  (        4)
00:07:12.097  20870.695 - 20971.520:   99.4401%  (        4)
00:07:12.097  20971.520 - 21072.345:   99.4561%  (        3)
00:07:12.097  21072.345 - 21173.169:   99.4827%  (        5)
00:07:12.097  21173.169 - 21273.994:   99.5041%  (        4)
00:07:12.097  21273.994 - 21374.818:   99.5254%  (        4)
00:07:12.097  21374.818 - 21475.643:   99.5467%  (        4)
00:07:12.097  21475.643 - 21576.468:   99.5734%  (        5)
00:07:12.097  21576.468 - 21677.292:   99.5947%  (        4)
00:07:12.097  21677.292 - 21778.117:   99.6160%  (        4)
00:07:12.097  21778.117 - 21878.942:   99.6427%  (        5)
00:07:12.097  21878.942 - 21979.766:   99.6587%  (        3)
00:07:12.097  24802.855 - 24903.680:   99.6800%  (        4)
00:07:12.097  24903.680 - 25004.505:   99.7014%  (        4)
00:07:12.097  25004.505 - 25105.329:   99.7280%  (        5)
00:07:12.097  25105.329 - 25206.154:   99.7440%  (        3)
00:07:12.097  25206.154 - 25306.978:   99.7654%  (        4)
00:07:12.097  25306.978 - 25407.803:   99.7867%  (        4)
00:07:12.097  25407.803 - 25508.628:   99.8080%  (        4)
00:07:12.097  25508.628 - 25609.452:   99.8347%  (        5)
00:07:12.097  25609.452 - 25710.277:   99.8560%  (        4)
00:07:12.097  25710.277 - 25811.102:   99.8773%  (        4)
00:07:12.097  25811.102 - 26012.751:   99.9253%  (        9)
00:07:12.097  26012.751 - 26214.400:   99.9680%  (        8)
00:07:12.097  26214.400 - 26416.049:  100.0000%  (        6)
00:07:12.097  
00:07:12.097  Latency histogram for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:07:12.097  ==============================================================================
00:07:12.097         Range in us     Cumulative    IO count
00:07:12.097   4990.818 -  5016.025:    0.0053%  (        1)
00:07:12.097   5016.025 -  5041.231:    0.0531%  (        9)
00:07:12.097   5041.231 -  5066.437:    0.1488%  (       18)
00:07:12.097   5066.437 -  5091.643:    0.2657%  (       22)
00:07:12.097   5091.643 -  5116.849:    0.3880%  (       23)
00:07:12.097   5116.849 -  5142.055:    0.5687%  (       34)
00:07:12.097   5142.055 -  5167.262:    0.7387%  (       32)
00:07:12.097   5167.262 -  5192.468:    1.0257%  (       54)
00:07:12.097   5192.468 -  5217.674:    1.2383%  (       40)
00:07:12.097   5217.674 -  5242.880:    1.4881%  (       47)
00:07:12.097   5242.880 -  5268.086:    1.7751%  (       54)
00:07:12.097   5268.086 -  5293.292:    2.0568%  (       53)
00:07:12.097   5293.292 -  5318.498:    2.3544%  (       56)
00:07:12.097   5318.498 -  5343.705:    2.6520%  (       56)
00:07:12.097   5343.705 -  5368.911:    3.0559%  (       76)
00:07:12.097   5368.911 -  5394.117:    3.4279%  (       70)
00:07:12.097   5394.117 -  5419.323:    3.7840%  (       67)
00:07:12.097   5419.323 -  5444.529:    4.1879%  (       76)
00:07:12.097   5444.529 -  5469.735:    4.5227%  (       63)
00:07:12.097   5469.735 -  5494.942:    4.9426%  (       79)
00:07:12.097   5494.942 -  5520.148:    5.4369%  (       93)
00:07:12.097   5520.148 -  5545.354:    5.9736%  (      101)
00:07:12.097   5545.354 -  5570.560:    6.6539%  (      128)
00:07:12.097   5570.560 -  5595.766:    7.4139%  (      143)
00:07:12.097   5595.766 -  5620.972:    8.2642%  (      160)
00:07:12.097   5620.972 -  5646.178:    9.1252%  (      162)
00:07:12.097   5646.178 -  5671.385:   10.1084%  (      185)
00:07:12.097   5671.385 -  5696.591:   11.1023%  (      187)
00:07:12.097   5696.591 -  5721.797:   12.1439%  (      196)
00:07:12.097   5721.797 -  5747.003:   13.1218%  (      184)
00:07:12.097   5747.003 -  5772.209:   14.2432%  (      211)
00:07:12.097   5772.209 -  5797.415:   15.5134%  (      239)
00:07:12.097   5797.415 -  5822.622:   17.1609%  (      310)
00:07:12.097   5822.622 -  5847.828:   18.7500%  (      299)
00:07:12.097   5847.828 -  5873.034:   20.4135%  (      313)
00:07:12.097   5873.034 -  5898.240:   22.1301%  (      323)
00:07:12.097   5898.240 -  5923.446:   23.8255%  (      319)
00:07:12.097   5923.446 -  5948.652:   25.5952%  (      333)
00:07:12.097   5948.652 -  5973.858:   27.3172%  (      324)
00:07:12.097   5973.858 -  5999.065:   29.1348%  (      342)
00:07:12.097   5999.065 -  6024.271:   31.0321%  (      357)
00:07:12.097   6024.271 -  6049.477:   32.8603%  (      344)
00:07:12.097   6049.477 -  6074.683:   34.6832%  (      343)
00:07:12.097   6074.683 -  6099.889:   36.5221%  (      346)
00:07:12.097   6099.889 -  6125.095:   38.3557%  (      345)
00:07:12.097   6125.095 -  6150.302:   40.2158%  (      350)
00:07:12.097   6150.302 -  6175.508:   42.1609%  (      366)
00:07:12.097   6175.508 -  6200.714:   43.9413%  (      335)
00:07:12.097   6200.714 -  6225.920:   45.7589%  (      342)
00:07:12.097   6225.920 -  6251.126:   47.4915%  (      326)
00:07:12.097   6251.126 -  6276.332:   49.2560%  (      332)
00:07:12.097   6276.332 -  6301.538:   51.0364%  (      335)
00:07:12.097   6301.538 -  6326.745:   52.7795%  (      328)
00:07:12.097   6326.745 -  6351.951:   54.4058%  (      306)
00:07:12.097   6351.951 -  6377.157:   56.0108%  (      302)
00:07:12.097   6377.157 -  6402.363:   57.5361%  (      287)
00:07:12.097   6402.363 -  6427.569:   59.0136%  (      278)
00:07:12.097   6427.569 -  6452.775:   60.5070%  (      281)
00:07:12.097   6452.775 -  6503.188:   63.3663%  (      538)
00:07:12.097   6503.188 -  6553.600:   66.1511%  (      524)
00:07:12.097   6553.600 -  6604.012:   68.4736%  (      437)
00:07:12.097   6604.012 -  6654.425:   70.3072%  (      345)
00:07:12.097   6654.425 -  6704.837:   71.6518%  (      253)
00:07:12.097   6704.837 -  6755.249:   72.7360%  (      204)
00:07:12.097   6755.249 -  6805.662:   73.6554%  (      173)
00:07:12.097   6805.662 -  6856.074:   74.4420%  (      148)
00:07:12.097   6856.074 -  6906.486:   75.0691%  (      118)
00:07:12.097   6906.486 -  6956.898:   75.6378%  (      107)
00:07:12.097   6956.898 -  7007.311:   76.1586%  (       98)
00:07:12.097   7007.311 -  7057.723:   76.6528%  (       93)
00:07:12.097   7057.723 -  7108.135:   77.1790%  (       99)
00:07:12.097   7108.135 -  7158.548:   77.6626%  (       91)
00:07:12.097   7158.548 -  7208.960:   78.1091%  (       84)
00:07:12.097   7208.960 -  7259.372:   78.6033%  (       93)
00:07:12.097   7259.372 -  7309.785:   79.0869%  (       91)
00:07:12.097   7309.785 -  7360.197:   79.5334%  (       84)
00:07:12.097   7360.197 -  7410.609:   79.9745%  (       83)
00:07:12.097   7410.609 -  7461.022:   80.3731%  (       75)
00:07:12.097   7461.022 -  7511.434:   80.7876%  (       78)
00:07:12.097   7511.434 -  7561.846:   81.1650%  (       71)
00:07:12.097   7561.846 -  7612.258:   81.5476%  (       72)
00:07:12.097   7612.258 -  7662.671:   81.9568%  (       77)
00:07:12.097   7662.671 -  7713.083:   82.2651%  (       58)
00:07:12.097   7713.083 -  7763.495:   82.6796%  (       78)
00:07:12.097   7763.495 -  7813.908:   83.0570%  (       71)
00:07:12.097   7813.908 -  7864.320:   83.4396%  (       72)
00:07:12.097   7864.320 -  7914.732:   83.8116%  (       70)
00:07:12.097   7914.732 -  7965.145:   84.1784%  (       69)
00:07:12.097   7965.145 -  8015.557:   84.4866%  (       58)
00:07:12.097   8015.557 -  8065.969:   84.8002%  (       59)
00:07:12.097   8065.969 -  8116.382:   85.1403%  (       64)
00:07:12.097   8116.382 -  8166.794:   85.4698%  (       62)
00:07:12.097   8166.794 -  8217.206:   85.8153%  (       65)
00:07:12.097   8217.206 -  8267.618:   86.1607%  (       65)
00:07:12.097   8267.618 -  8318.031:   86.5593%  (       75)
00:07:12.097   8318.031 -  8368.443:   86.9313%  (       70)
00:07:12.097   8368.443 -  8418.855:   87.3140%  (       72)
00:07:12.097   8418.855 -  8469.268:   87.6966%  (       72)
00:07:12.097   8469.268 -  8519.680:   88.1431%  (       84)
00:07:12.097   8519.680 -  8570.092:   88.5204%  (       71)
00:07:12.097   8570.092 -  8620.505:   88.8924%  (       70)
00:07:12.097   8620.505 -  8670.917:   89.2538%  (       68)
00:07:12.097   8670.917 -  8721.329:   89.6259%  (       70)
00:07:12.097   8721.329 -  8771.742:   89.9713%  (       65)
00:07:12.097   8771.742 -  8822.154:   90.2795%  (       58)
00:07:12.097   8822.154 -  8872.566:   90.6675%  (       73)
00:07:12.097   8872.566 -  8922.978:   91.0183%  (       66)
00:07:12.097   8922.978 -  8973.391:   91.3372%  (       60)
00:07:12.097   8973.391 -  9023.803:   91.6720%  (       63)
00:07:12.097   9023.803 -  9074.215:   92.0281%  (       67)
00:07:12.097   9074.215 -  9124.628:   92.3469%  (       60)
00:07:12.097   9124.628 -  9175.040:   92.6605%  (       59)
00:07:12.097   9175.040 -  9225.452:   92.9369%  (       52)
00:07:12.097   9225.452 -  9275.865:   93.2026%  (       50)
00:07:12.097   9275.865 -  9326.277:   93.4364%  (       44)
00:07:12.097   9326.277 -  9376.689:   93.6756%  (       45)
00:07:12.097   9376.689 -  9427.102:   93.8776%  (       38)
00:07:12.097   9427.102 -  9477.514:   94.1327%  (       48)
00:07:12.097   9477.514 -  9527.926:   94.3718%  (       45)
00:07:12.097   9527.926 -  9578.338:   94.6216%  (       47)
00:07:12.097   9578.338 -  9628.751:   94.8554%  (       44)
00:07:12.097   9628.751 -  9679.163:   95.1052%  (       47)
00:07:12.097   9679.163 -  9729.575:   95.3125%  (       39)
00:07:12.097   9729.575 -  9779.988:   95.4879%  (       33)
00:07:12.097   9779.988 -  9830.400:   95.6633%  (       33)
00:07:12.097   9830.400 -  9880.812:   95.8599%  (       37)
00:07:12.097   9880.812 -  9931.225:   96.0300%  (       32)
00:07:12.097   9931.225 -  9981.637:   96.1947%  (       31)
00:07:12.097   9981.637 - 10032.049:   96.3435%  (       28)
00:07:12.097  10032.049 - 10082.462:   96.4764%  (       25)
00:07:12.097  10082.462 - 10132.874:   96.6093%  (       25)
00:07:12.097  10132.874 - 10183.286:   96.7262%  (       22)
00:07:12.097  10183.286 - 10233.698:   96.8856%  (       30)
00:07:12.097  10233.698 - 10284.111:   97.0238%  (       26)
00:07:12.097  10284.111 - 10334.523:   97.1832%  (       30)
00:07:12.097  10334.523 - 10384.935:   97.3321%  (       28)
00:07:12.097  10384.935 - 10435.348:   97.4702%  (       26)
00:07:12.098  10435.348 - 10485.760:   97.6190%  (       28)
00:07:12.098  10485.760 - 10536.172:   97.7519%  (       25)
00:07:12.098  10536.172 - 10586.585:   97.8848%  (       25)
00:07:12.098  10586.585 - 10636.997:   97.9911%  (       20)
00:07:12.098  10636.997 - 10687.409:   98.1080%  (       22)
00:07:12.098  10687.409 - 10737.822:   98.2302%  (       23)
00:07:12.098  10737.822 - 10788.234:   98.3418%  (       21)
00:07:12.098  10788.234 - 10838.646:   98.4428%  (       19)
00:07:12.098  10838.646 - 10889.058:   98.5332%  (       17)
00:07:12.098  10889.058 - 10939.471:   98.6395%  (       20)
00:07:12.098  10939.471 - 10989.883:   98.7351%  (       18)
00:07:12.098  10989.883 - 11040.295:   98.8308%  (       18)
00:07:12.098  11040.295 - 11090.708:   98.8999%  (       13)
00:07:12.098  11090.708 - 11141.120:   98.9530%  (       10)
00:07:12.098  11141.120 - 11191.532:   99.0009%  (        9)
00:07:12.098  11191.532 - 11241.945:   99.0381%  (        7)
00:07:12.098  11241.945 - 11292.357:   99.0540%  (        3)
00:07:12.098  11292.357 - 11342.769:   99.0753%  (        4)
00:07:12.098  11342.769 - 11393.182:   99.0965%  (        4)
00:07:12.098  11393.182 - 11443.594:   99.1178%  (        4)
00:07:12.098  11443.594 - 11494.006:   99.1390%  (        4)
00:07:12.098  11494.006 - 11544.418:   99.1603%  (        4)
00:07:12.098  11544.418 - 11594.831:   99.1762%  (        3)
00:07:12.098  11594.831 - 11645.243:   99.1975%  (        4)
00:07:12.098  11645.243 - 11695.655:   99.2188%  (        4)
00:07:12.098  11695.655 - 11746.068:   99.2347%  (        3)
00:07:12.098  11746.068 - 11796.480:   99.2560%  (        4)
00:07:12.098  11796.480 - 11846.892:   99.2719%  (        3)
00:07:12.098  11846.892 - 11897.305:   99.2932%  (        4)
00:07:12.098  11897.305 - 11947.717:   99.3091%  (        3)
00:07:12.098  11947.717 - 11998.129:   99.3197%  (        2)
00:07:12.098  15022.868 - 15123.692:   99.3250%  (        1)
00:07:12.098  15123.692 - 15224.517:   99.3463%  (        4)
00:07:12.098  15224.517 - 15325.342:   99.3676%  (        4)
00:07:12.098  15325.342 - 15426.166:   99.3888%  (        4)
00:07:12.098  15426.166 - 15526.991:   99.4101%  (        4)
00:07:12.098  15526.991 - 15627.815:   99.4366%  (        5)
00:07:12.098  15627.815 - 15728.640:   99.4579%  (        4)
00:07:12.098  15728.640 - 15829.465:   99.4792%  (        4)
00:07:12.098  15829.465 - 15930.289:   99.5004%  (        4)
00:07:12.098  15930.289 - 16031.114:   99.5217%  (        4)
00:07:12.098  16031.114 - 16131.938:   99.5429%  (        4)
00:07:12.098  16131.938 - 16232.763:   99.5642%  (        4)
00:07:12.098  16232.763 - 16333.588:   99.5855%  (        4)
00:07:12.098  16333.588 - 16434.412:   99.6120%  (        5)
00:07:12.098  16434.412 - 16535.237:   99.6333%  (        4)
00:07:12.098  16535.237 - 16636.062:   99.6545%  (        4)
00:07:12.098  16636.062 - 16736.886:   99.6599%  (        1)
00:07:12.098  19761.625 - 19862.449:   99.6811%  (        4)
00:07:12.098  19862.449 - 19963.274:   99.7077%  (        5)
00:07:12.098  19963.274 - 20064.098:   99.7290%  (        4)
00:07:12.098  20064.098 - 20164.923:   99.7502%  (        4)
00:07:12.098  20164.923 - 20265.748:   99.7768%  (        5)
00:07:12.098  20265.748 - 20366.572:   99.7980%  (        4)
00:07:12.098  20366.572 - 20467.397:   99.8193%  (        4)
00:07:12.098  20467.397 - 20568.222:   99.8406%  (        4)
00:07:12.098  20568.222 - 20669.046:   99.8618%  (        4)
00:07:12.098  20669.046 - 20769.871:   99.8831%  (        4)
00:07:12.098  20769.871 - 20870.695:   99.9043%  (        4)
00:07:12.098  20870.695 - 20971.520:   99.9256%  (        4)
00:07:12.098  20971.520 - 21072.345:   99.9522%  (        5)
00:07:12.098  21072.345 - 21173.169:   99.9734%  (        4)
00:07:12.098  21173.169 - 21273.994:   99.9947%  (        4)
00:07:12.098  21273.994 - 21374.818:  100.0000%  (        1)
00:07:12.098  
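Each histogram block above uses the same fixed layout: one bucket per line, formatted as "<low_us> - <high_us>: <cumulative>% (<io_count>)", where the cumulative column is the running percentage of all I/Os that completed at or below <high_us> microseconds, so the final bucket of every block reads 100.0000%. A quick way to sanity-check a block is to parse it and confirm that the bucket counts sum to the device's I/O total and the last cumulative value is 100%. A minimal sketch, assuming one histogram block copied out of this log into a file (the file name and regex are illustrative, not part of SPDK):

import re
import sys

# Matches the bucket lines printed above, e.g.
#   "5016.025 -  5041.231:    0.0853%  (       10)"
# re.search (rather than re.match) is used so the leading
# "00:07:12.xxx" timestamp column need not be stripped first.
BUCKET = re.compile(r"([\d.]+)\s+-\s+([\d.]+):\s+([\d.]+)%\s+\(\s*(\d+)\)")

def parse_buckets(lines):
    """Yield (low_us, high_us, cumulative_pct, io_count) per bucket line."""
    for line in lines:
        m = BUCKET.search(line)
        if m:
            low, high, cum, count = m.groups()
            yield float(low), float(high), float(cum), int(count)

if __name__ == "__main__":
    # "histogram.txt" is a hypothetical extract holding one block.
    with open(sys.argv[1] if len(sys.argv) > 1 else "histogram.txt") as f:
        buckets = list(parse_buckets(f))
    total = sum(b[3] for b in buckets)
    print(f"{len(buckets)} buckets, {total} I/Os, "
          f"final cumulative {buckets[-1][2]:.4f}%")
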
00:07:12.098   16:56:34 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:13.034  Initializing NVMe Controllers
00:07:13.034  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:13.034  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:13.034  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:13.034  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:13.034  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:13.034  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:13.034  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:13.034  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:13.034  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:13.034  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:13.034  Initialization complete. Launching workers.
00:07:13.034  ========================================================
00:07:13.034                                                                             Latency(us)
00:07:13.034  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:13.034  PCIE (0000:00:10.0) NSID 1 from core  0:   17510.38     205.20    7319.76    5522.72   30899.55
00:07:13.034  PCIE (0000:00:11.0) NSID 1 from core  0:   17510.38     205.20    7308.66    5701.36   29044.18
00:07:13.034  PCIE (0000:00:13.0) NSID 1 from core  0:   17510.38     205.20    7297.34    5679.73   27723.68
00:07:13.034  PCIE (0000:00:12.0) NSID 1 from core  0:   17510.38     205.20    7286.18    5604.63   25934.11
00:07:13.034  PCIE (0000:00:12.0) NSID 2 from core  0:   17510.38     205.20    7275.02    5777.26   24247.03
00:07:13.034  PCIE (0000:00:12.0) NSID 3 from core  0:   17574.29     205.95    7237.64    5503.74   18942.42
00:07:13.034  ========================================================
00:07:13.034  Total                                  :  105126.20    1231.95    7287.40    5503.74   30899.55
00:07:13.034  
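The MiB/s column in the table above follows directly from the IOPS column and the 12288-byte I/O size requested with -o: MiB/s = IOPS x 12288 / 2^20, so 17510.38 IOPS works out to about 205.20 MiB/s and 17574.29 IOPS to about 205.95 MiB/s, matching the printed values. A one-line check of that arithmetic (illustrative Python, not SPDK output):

# MiB/s = IOPS * io_size_bytes / 2**20 for the 12288-byte writes above.
for iops in (17510.38, 17574.29):
    print(f"{iops:9.2f} IOPS -> {iops * 12288 / 2**20:7.2f} MiB/s")
# prints: 17510.38 IOPS ->  205.20 MiB/s
#         17574.29 IOPS ->  205.95 MiB/s
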
00:07:13.034  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:13.034  =================================================================================
00:07:13.034    1.00000% :  5898.240us
00:07:13.034   10.00000% :  6326.745us
00:07:13.034   25.00000% :  6553.600us
00:07:13.034   50.00000% :  6956.898us
00:07:13.034   75.00000% :  7561.846us
00:07:13.034   90.00000% :  8570.092us
00:07:13.034   95.00000% :  9074.215us
00:07:13.034   98.00000% : 10183.286us
00:07:13.034   99.00000% : 11241.945us
00:07:13.034   99.50000% : 25206.154us
00:07:13.034   99.90000% : 30449.034us
00:07:13.034   99.99000% : 30852.332us
00:07:13.034   99.99900% : 31053.982us
00:07:13.034   99.99990% : 31053.982us
00:07:13.034   99.99999% : 31053.982us
00:07:13.034  
00:07:13.034  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:13.034  =================================================================================
00:07:13.034    1.00000% :  6074.683us
00:07:13.034   10.00000% :  6377.157us
00:07:13.034   25.00000% :  6604.012us
00:07:13.034   50.00000% :  6906.486us
00:07:13.034   75.00000% :  7561.846us
00:07:13.034   90.00000% :  8570.092us
00:07:13.034   95.00000% :  8922.978us
00:07:13.034   98.00000% : 10183.286us
00:07:13.034   99.00000% : 11241.945us
00:07:13.034   99.50000% : 23592.960us
00:07:13.034   99.90000% : 28634.191us
00:07:13.034   99.99000% : 29037.489us
00:07:13.034   99.99900% : 29239.138us
00:07:13.034   99.99990% : 29239.138us
00:07:13.034   99.99999% : 29239.138us
00:07:13.034  
00:07:13.034  Summary latency data for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:07:13.034  =================================================================================
00:07:13.034    1.00000% :  6074.683us
00:07:13.034   10.00000% :  6351.951us
00:07:13.034   25.00000% :  6553.600us
00:07:13.034   50.00000% :  6906.486us
00:07:13.034   75.00000% :  7511.434us
00:07:13.034   90.00000% :  8620.505us
00:07:13.034   95.00000% :  8872.566us
00:07:13.034   98.00000% : 10384.935us
00:07:13.034   99.00000% : 10939.471us
00:07:13.034   99.50000% : 22483.889us
00:07:13.034   99.90000% : 27424.295us
00:07:13.034   99.99000% : 27827.594us
00:07:13.034   99.99900% : 27827.594us
00:07:13.034   99.99990% : 27827.594us
00:07:13.034   99.99999% : 27827.594us
00:07:13.034  
00:07:13.034  Summary latency data for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:07:13.034  =================================================================================
00:07:13.034    1.00000% :  6074.683us
00:07:13.034   10.00000% :  6351.951us
00:07:13.034   25.00000% :  6604.012us
00:07:13.034   50.00000% :  6906.486us
00:07:13.034   75.00000% :  7511.434us
00:07:13.034   90.00000% :  8620.505us
00:07:13.034   95.00000% :  8872.566us
00:07:13.034   98.00000% : 10435.348us
00:07:13.034   99.00000% : 10838.646us
00:07:13.034   99.50000% : 20769.871us
00:07:13.034   99.90000% : 25508.628us
00:07:13.034   99.99000% : 26012.751us
00:07:13.034   99.99900% : 26012.751us
00:07:13.034   99.99990% : 26012.751us
00:07:13.035   99.99999% : 26012.751us
00:07:13.035  
00:07:13.035  Summary latency data for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:07:13.035  =================================================================================
00:07:13.035    1.00000% :  6049.477us
00:07:13.035   10.00000% :  6351.951us
00:07:13.035   25.00000% :  6604.012us
00:07:13.035   50.00000% :  6906.486us
00:07:13.035   75.00000% :  7511.434us
00:07:13.035   90.00000% :  8570.092us
00:07:13.035   95.00000% :  8872.566us
00:07:13.035   98.00000% : 10384.935us
00:07:13.035   99.00000% : 10989.883us
00:07:13.035   99.50000% : 19055.852us
00:07:13.035   99.90000% : 23895.434us
00:07:13.035   99.99000% : 24298.732us
00:07:13.035   99.99900% : 24298.732us
00:07:13.035   99.99990% : 24298.732us
00:07:13.035   99.99999% : 24298.732us
00:07:13.035  
00:07:13.035  Summary latency data for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:07:13.035  =================================================================================
00:07:13.035    1.00000% :  6099.889us
00:07:13.035   10.00000% :  6351.951us
00:07:13.035   25.00000% :  6604.012us
00:07:13.035   50.00000% :  6906.486us
00:07:13.035   75.00000% :  7561.846us
00:07:13.035   90.00000% :  8570.092us
00:07:13.035   95.00000% :  8922.978us
00:07:13.035   98.00000% : 10183.286us
00:07:13.035   99.00000% : 11090.708us
00:07:13.035   99.50000% : 13611.323us
00:07:13.035   99.90000% : 18551.729us
00:07:13.035   99.99000% : 18955.028us
00:07:13.035   99.99900% : 18955.028us
00:07:13.035   99.99990% : 18955.028us
00:07:13.035   99.99999% : 18955.028us
00:07:13.035  
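Each percentile row in the summary tables above lines up with the upper bound of the first histogram bucket whose cumulative percentage reaches that level; for example, the 99.50000% entry of 25206.154us for 0000:00:10.0 corresponds to the "25105.329 - 25206.154: 99.5153%" bucket in the histogram that follows. Note also that with roughly 17.5k I/Os per device, a single I/O is about 0.0057% of the distribution, which is why the 99.99900% and higher rows collapse onto the same maximum-latency bucket. A small lookup sketch building on parse_buckets above (a plausible reading of the output, not spdk_nvme_perf's own code):

def percentile_bucket(buckets, pct):
    """Return the upper edge (us) of the first bucket whose cumulative
    percentage reaches pct, or None if pct is never reached."""
    for low, high, cum, count in buckets:
        if cum >= pct:
            return high
    return None

# e.g. percentile_bucket(buckets, 99.5) on the 0000:00:10.0 histogram
# below returns 25206.154, matching that device's summary table.
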
00:07:13.035  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:13.035  ==============================================================================
00:07:13.035         Range in us     Cumulative    IO count
00:07:13.035   5520.148 -  5545.354:    0.0285%  (        5)
00:07:13.035   5545.354 -  5570.560:    0.0399%  (        2)
00:07:13.035   5570.560 -  5595.766:    0.0570%  (        3)
00:07:13.035   5595.766 -  5620.972:    0.0684%  (        2)
00:07:13.035   5620.972 -  5646.178:    0.1198%  (        9)
00:07:13.035   5646.178 -  5671.385:    0.1483%  (        5)
00:07:13.035   5671.385 -  5696.591:    0.1825%  (        6)
00:07:13.035   5696.591 -  5721.797:    0.2167%  (        6)
00:07:13.035   5721.797 -  5747.003:    0.2737%  (       10)
00:07:13.035   5747.003 -  5772.209:    0.3650%  (       16)
00:07:13.035   5772.209 -  5797.415:    0.4961%  (       23)
00:07:13.035   5797.415 -  5822.622:    0.5931%  (       17)
00:07:13.035   5822.622 -  5847.828:    0.7242%  (       23)
00:07:13.035   5847.828 -  5873.034:    0.8611%  (       24)
00:07:13.035   5873.034 -  5898.240:    1.0493%  (       33)
00:07:13.035   5898.240 -  5923.446:    1.2032%  (       27)
00:07:13.035   5923.446 -  5948.652:    1.5226%  (       56)
00:07:13.035   5948.652 -  5973.858:    1.8419%  (       56)
00:07:13.035   5973.858 -  5999.065:    2.2126%  (       65)
00:07:13.035   5999.065 -  6024.271:    2.4749%  (       46)
00:07:13.035   6024.271 -  6049.477:    2.7657%  (       51)
00:07:13.035   6049.477 -  6074.683:    3.1649%  (       70)
00:07:13.035   6074.683 -  6099.889:    3.6439%  (       84)
00:07:13.035   6099.889 -  6125.095:    4.2199%  (      101)
00:07:13.035   6125.095 -  6150.302:    4.8871%  (      117)
00:07:13.035   6150.302 -  6175.508:    5.4630%  (      101)
00:07:13.035   6175.508 -  6200.714:    6.1816%  (      126)
00:07:13.035   6200.714 -  6225.920:    6.8374%  (      115)
00:07:13.035   6225.920 -  6251.126:    7.6870%  (      149)
00:07:13.035   6251.126 -  6276.332:    8.7363%  (      184)
00:07:13.035   6276.332 -  6301.538:    9.7457%  (      177)
00:07:13.035   6301.538 -  6326.745:   10.9147%  (      205)
00:07:13.035   6326.745 -  6351.951:   12.2149%  (      228)
00:07:13.035   6351.951 -  6377.157:   13.6063%  (      244)
00:07:13.035   6377.157 -  6402.363:   14.9692%  (      239)
00:07:13.035   6402.363 -  6427.569:   16.6229%  (      290)
00:07:13.035   6427.569 -  6452.775:   18.1740%  (      272)
00:07:13.035   6452.775 -  6503.188:   21.6526%  (      610)
00:07:13.035   6503.188 -  6553.600:   25.3307%  (      645)
00:07:13.035   6553.600 -  6604.012:   28.6724%  (      586)
00:07:13.035   6604.012 -  6654.425:   32.3905%  (      652)
00:07:13.035   6654.425 -  6704.837:   35.9432%  (      623)
00:07:13.035   6704.837 -  6755.249:   39.4332%  (      612)
00:07:13.035   6755.249 -  6805.662:   43.2140%  (      663)
00:07:13.035   6805.662 -  6856.074:   46.4872%  (      574)
00:07:13.035   6856.074 -  6906.486:   49.6921%  (      562)
00:07:13.035   6906.486 -  6956.898:   52.9197%  (      566)
00:07:13.035   6956.898 -  7007.311:   56.2158%  (      578)
00:07:13.035   7007.311 -  7057.723:   59.1697%  (      518)
00:07:13.035   7057.723 -  7108.135:   61.6845%  (      441)
00:07:13.035   7108.135 -  7158.548:   63.7774%  (      367)
00:07:13.035   7158.548 -  7208.960:   65.6478%  (      328)
00:07:13.035   7208.960 -  7259.372:   67.5240%  (      329)
00:07:13.035   7259.372 -  7309.785:   69.3773%  (      325)
00:07:13.035   7309.785 -  7360.197:   70.7744%  (      245)
00:07:13.035   7360.197 -  7410.609:   71.9891%  (      213)
00:07:13.035   7410.609 -  7461.022:   73.1752%  (      208)
00:07:13.035   7461.022 -  7511.434:   74.1902%  (      178)
00:07:13.035   7511.434 -  7561.846:   75.2224%  (      181)
00:07:13.035   7561.846 -  7612.258:   76.0036%  (      137)
00:07:13.035   7612.258 -  7662.671:   76.7507%  (      131)
00:07:13.035   7662.671 -  7713.083:   77.3609%  (      107)
00:07:13.035   7713.083 -  7763.495:   77.9026%  (       95)
00:07:13.035   7763.495 -  7813.908:   78.7523%  (      149)
00:07:13.035   7813.908 -  7864.320:   79.3282%  (      101)
00:07:13.035   7864.320 -  7914.732:   79.9783%  (      114)
00:07:13.035   7914.732 -  7965.145:   80.6455%  (      117)
00:07:13.035   7965.145 -  8015.557:   81.4211%  (      136)
00:07:13.035   8015.557 -  8065.969:   82.1909%  (      135)
00:07:13.035   8065.969 -  8116.382:   83.2231%  (      181)
00:07:13.035   8116.382 -  8166.794:   84.3237%  (      193)
00:07:13.035   8166.794 -  8217.206:   85.1163%  (      139)
00:07:13.035   8217.206 -  8267.618:   85.9489%  (      146)
00:07:13.035   8267.618 -  8318.031:   86.7929%  (      148)
00:07:13.035   8318.031 -  8368.443:   87.4316%  (      112)
00:07:13.035   8368.443 -  8418.855:   88.1330%  (      123)
00:07:13.035   8418.855 -  8469.268:   88.8458%  (      125)
00:07:13.035   8469.268 -  8519.680:   89.5301%  (      120)
00:07:13.035   8519.680 -  8570.092:   90.1916%  (      116)
00:07:13.035   8570.092 -  8620.505:   90.7219%  (       93)
00:07:13.035   8620.505 -  8670.917:   91.5089%  (      138)
00:07:13.035   8670.917 -  8721.329:   91.9936%  (       85)
00:07:13.035   8721.329 -  8771.742:   92.4897%  (       87)
00:07:13.035   8771.742 -  8822.154:   92.9802%  (       86)
00:07:13.035   8822.154 -  8872.566:   93.4078%  (       75)
00:07:13.035   8872.566 -  8922.978:   93.8355%  (       75)
00:07:13.035   8922.978 -  8973.391:   94.2176%  (       67)
00:07:13.035   8973.391 -  9023.803:   94.6567%  (       77)
00:07:13.035   9023.803 -  9074.215:   95.0559%  (       70)
00:07:13.035   9074.215 -  9124.628:   95.4380%  (       67)
00:07:13.035   9124.628 -  9175.040:   95.8029%  (       64)
00:07:13.035   9175.040 -  9225.452:   96.0253%  (       39)
00:07:13.035   9225.452 -  9275.865:   96.2078%  (       32)
00:07:13.035   9275.865 -  9326.277:   96.3675%  (       28)
00:07:13.035   9326.277 -  9376.689:   96.5043%  (       24)
00:07:13.035   9376.689 -  9427.102:   96.6241%  (       21)
00:07:13.035   9427.102 -  9477.514:   96.7210%  (       17)
00:07:13.035   9477.514 -  9527.926:   96.8237%  (       18)
00:07:13.035   9527.926 -  9578.338:   96.9092%  (       15)
00:07:13.035   9578.338 -  9628.751:   97.0176%  (       19)
00:07:13.035   9628.751 -  9679.163:   97.1373%  (       21)
00:07:13.035   9679.163 -  9729.575:   97.2172%  (       14)
00:07:13.035   9729.575 -  9779.988:   97.2742%  (       10)
00:07:13.035   9779.988 -  9830.400:   97.4453%  (       30)
00:07:13.035   9830.400 -  9880.812:   97.5194%  (       13)
00:07:13.035   9880.812 -  9931.225:   97.5935%  (       13)
00:07:13.035   9931.225 -  9981.637:   97.6791%  (       15)
00:07:13.035   9981.637 - 10032.049:   97.7304%  (        9)
00:07:13.035  10032.049 - 10082.462:   97.8501%  (       21)
00:07:13.035  10082.462 - 10132.874:   97.9471%  (       17)
00:07:13.035  10132.874 - 10183.286:   98.0212%  (       13)
00:07:13.035  10183.286 - 10233.698:   98.1125%  (       16)
00:07:13.035  10233.698 - 10284.111:   98.1809%  (       12)
00:07:13.035  10284.111 - 10334.523:   98.2550%  (       13)
00:07:13.035  10334.523 - 10384.935:   98.3177%  (       11)
00:07:13.035  10384.935 - 10435.348:   98.3862%  (       12)
00:07:13.035  10435.348 - 10485.760:   98.4261%  (        7)
00:07:13.035  10485.760 - 10536.172:   98.4831%  (       10)
00:07:13.035  10536.172 - 10586.585:   98.5287%  (        8)
00:07:13.035  10586.585 - 10636.997:   98.5858%  (       10)
00:07:13.035  10636.997 - 10687.409:   98.6542%  (       12)
00:07:13.035  10687.409 - 10737.822:   98.6998%  (        8)
00:07:13.035  10737.822 - 10788.234:   98.7283%  (        5)
00:07:13.035  10788.234 - 10838.646:   98.7454%  (        3)
00:07:13.035  10838.646 - 10889.058:   98.7568%  (        2)
00:07:13.036  10889.058 - 10939.471:   98.7625%  (        1)
00:07:13.036  10939.471 - 10989.883:   98.7740%  (        2)
00:07:13.036  10989.883 - 11040.295:   98.7797%  (        1)
00:07:13.036  11040.295 - 11090.708:   98.7968%  (        3)
00:07:13.036  11090.708 - 11141.120:   98.8595%  (       11)
00:07:13.036  11141.120 - 11191.532:   98.9450%  (       15)
00:07:13.036  11191.532 - 11241.945:   99.0021%  (       10)
00:07:13.036  11241.945 - 11292.357:   99.0363%  (        6)
00:07:13.036  11292.357 - 11342.769:   99.0705%  (        6)
00:07:13.036  11342.769 - 11393.182:   99.0933%  (        4)
00:07:13.036  11393.182 - 11443.594:   99.1275%  (        6)
00:07:13.036  11443.594 - 11494.006:   99.1617%  (        6)
00:07:13.036  11494.006 - 11544.418:   99.1959%  (        6)
00:07:13.036  11544.418 - 11594.831:   99.2245%  (        5)
00:07:13.036  11594.831 - 11645.243:   99.2416%  (        3)
00:07:13.036  11645.243 - 11695.655:   99.2644%  (        4)
00:07:13.036  11746.068 - 11796.480:   99.2701%  (        1)
00:07:13.036  24500.382 - 24601.206:   99.2929%  (        4)
00:07:13.036  24601.206 - 24702.031:   99.3214%  (        5)
00:07:13.036  24702.031 - 24802.855:   99.3898%  (       12)
00:07:13.036  24802.855 - 24903.680:   99.4297%  (        7)
00:07:13.036  24903.680 - 25004.505:   99.4811%  (        9)
00:07:13.036  25004.505 - 25105.329:   99.4982%  (        3)
00:07:13.036  25105.329 - 25206.154:   99.5153%  (        3)
00:07:13.036  25206.154 - 25306.978:   99.5210%  (        1)
00:07:13.036  25306.978 - 25407.803:   99.5438%  (        4)
00:07:13.036  25508.628 - 25609.452:   99.5609%  (        3)
00:07:13.036  25609.452 - 25710.277:   99.5894%  (        5)
00:07:13.036  25710.277 - 25811.102:   99.6065%  (        3)
00:07:13.036  25811.102 - 26012.751:   99.6350%  (        5)
00:07:13.036  29037.489 - 29239.138:   99.6521%  (        3)
00:07:13.036  29239.138 - 29440.788:   99.6921%  (        7)
00:07:13.036  29440.788 - 29642.437:   99.7377%  (        8)
00:07:13.036  29642.437 - 29844.086:   99.7833%  (        8)
00:07:13.036  29844.086 - 30045.735:   99.8232%  (        7)
00:07:13.036  30045.735 - 30247.385:   99.8631%  (        7)
00:07:13.036  30247.385 - 30449.034:   99.9088%  (        8)
00:07:13.036  30449.034 - 30650.683:   99.9544%  (        8)
00:07:13.036  30650.683 - 30852.332:   99.9943%  (        7)
00:07:13.036  30852.332 - 31053.982:  100.0000%  (        1)
00:07:13.036  
00:07:13.036  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:07:13.036  ==============================================================================
00:07:13.036         Range in us     Cumulative    IO count
00:07:13.036   5696.591 -  5721.797:    0.0057%  (        1)
00:07:13.036   5797.415 -  5822.622:    0.0114%  (        1)
00:07:13.036   5822.622 -  5847.828:    0.0228%  (        2)
00:07:13.036   5847.828 -  5873.034:    0.0342%  (        2)
00:07:13.036   5873.034 -  5898.240:    0.0741%  (        7)
00:07:13.036   5898.240 -  5923.446:    0.2053%  (       23)
00:07:13.036   5923.446 -  5948.652:    0.3650%  (       28)
00:07:13.036   5948.652 -  5973.858:    0.5075%  (       25)
00:07:13.036   5973.858 -  5999.065:    0.5817%  (       13)
00:07:13.036   5999.065 -  6024.271:    0.6729%  (       16)
00:07:13.036   6024.271 -  6049.477:    0.8896%  (       38)
00:07:13.036   6049.477 -  6074.683:    1.1291%  (       42)
00:07:13.036   6074.683 -  6099.889:    1.4713%  (       60)
00:07:13.036   6099.889 -  6125.095:    1.9902%  (       91)
00:07:13.036   6125.095 -  6150.302:    2.2183%  (       40)
00:07:13.036   6150.302 -  6175.508:    3.0281%  (      142)
00:07:13.036   6175.508 -  6200.714:    3.9804%  (      167)
00:07:13.036   6200.714 -  6225.920:    4.8586%  (      154)
00:07:13.036   6225.920 -  6251.126:    5.4345%  (      101)
00:07:13.036   6251.126 -  6276.332:    6.1930%  (      133)
00:07:13.036   6276.332 -  6301.538:    7.3449%  (      202)
00:07:13.036   6301.538 -  6326.745:    8.5766%  (      216)
00:07:13.036   6326.745 -  6351.951:    9.9510%  (      241)
00:07:13.036   6351.951 -  6377.157:   12.1464%  (      385)
00:07:13.036   6377.157 -  6402.363:   13.9713%  (      320)
00:07:13.036   6402.363 -  6427.569:   15.2543%  (      225)
00:07:13.036   6427.569 -  6452.775:   16.5944%  (      235)
00:07:13.036   6452.775 -  6503.188:   20.3011%  (      650)
00:07:13.036   6503.188 -  6553.600:   24.2359%  (      690)
00:07:13.036   6553.600 -  6604.012:   28.1991%  (      695)
00:07:13.036   6604.012 -  6654.425:   32.7669%  (      801)
00:07:13.036   6654.425 -  6704.837:   37.1578%  (      770)
00:07:13.036   6704.837 -  6755.249:   41.5659%  (      773)
00:07:13.036   6755.249 -  6805.662:   45.0958%  (      619)
00:07:13.036   6805.662 -  6856.074:   48.3349%  (      568)
00:07:13.036   6856.074 -  6906.486:   51.1006%  (      485)
00:07:13.036   6906.486 -  6956.898:   53.8207%  (      477)
00:07:13.036   6956.898 -  7007.311:   57.0712%  (      570)
00:07:13.036   7007.311 -  7057.723:   59.9624%  (      507)
00:07:13.036   7057.723 -  7108.135:   62.0837%  (      372)
00:07:13.036   7108.135 -  7158.548:   63.9998%  (      336)
00:07:13.036   7158.548 -  7208.960:   66.7256%  (      478)
00:07:13.036   7208.960 -  7259.372:   68.6873%  (      344)
00:07:13.036   7259.372 -  7309.785:   70.5805%  (      332)
00:07:13.036   7309.785 -  7360.197:   72.2856%  (      299)
00:07:13.036   7360.197 -  7410.609:   73.2322%  (      166)
00:07:13.036   7410.609 -  7461.022:   74.0249%  (      139)
00:07:13.036   7461.022 -  7511.434:   74.8917%  (      152)
00:07:13.036   7511.434 -  7561.846:   75.8155%  (      162)
00:07:13.036   7561.846 -  7612.258:   76.4770%  (      116)
00:07:13.036   7612.258 -  7662.671:   77.3095%  (      146)
00:07:13.036   7662.671 -  7713.083:   77.8741%  (       99)
00:07:13.036   7713.083 -  7763.495:   78.5926%  (      126)
00:07:13.036   7763.495 -  7813.908:   79.0317%  (       77)
00:07:13.036   7813.908 -  7864.320:   79.4423%  (       72)
00:07:13.036   7864.320 -  7914.732:   79.8358%  (       69)
00:07:13.036   7914.732 -  7965.145:   80.2920%  (       80)
00:07:13.036   7965.145 -  8015.557:   80.6626%  (       65)
00:07:13.036   8015.557 -  8065.969:   81.1474%  (       85)
00:07:13.036   8065.969 -  8116.382:   81.5465%  (       70)
00:07:13.036   8116.382 -  8166.794:   82.2993%  (      132)
00:07:13.036   8166.794 -  8217.206:   83.1946%  (      157)
00:07:13.036   8217.206 -  8267.618:   83.9758%  (      137)
00:07:13.036   8267.618 -  8318.031:   84.8654%  (      156)
00:07:13.036   8318.031 -  8368.443:   85.7949%  (      163)
00:07:13.036   8368.443 -  8418.855:   87.1065%  (      230)
00:07:13.036   8418.855 -  8469.268:   88.4865%  (      242)
00:07:13.036   8469.268 -  8519.680:   89.6556%  (      205)
00:07:13.036   8519.680 -  8570.092:   90.4881%  (      146)
00:07:13.036   8570.092 -  8620.505:   91.1610%  (      118)
00:07:13.036   8620.505 -  8670.917:   91.7883%  (      110)
00:07:13.036   8670.917 -  8721.329:   92.4099%  (      109)
00:07:13.036   8721.329 -  8771.742:   93.1170%  (      124)
00:07:13.036   8771.742 -  8822.154:   93.9040%  (      138)
00:07:13.036   8822.154 -  8872.566:   94.5255%  (      109)
00:07:13.036   8872.566 -  8922.978:   95.0673%  (       95)
00:07:13.036   8922.978 -  8973.391:   95.9113%  (      148)
00:07:13.036   8973.391 -  9023.803:   96.0995%  (       33)
00:07:13.036   9023.803 -  9074.215:   96.2762%  (       31)
00:07:13.036   9074.215 -  9124.628:   96.4359%  (       28)
00:07:13.036   9124.628 -  9175.040:   96.5271%  (       16)
00:07:13.036   9175.040 -  9225.452:   96.5899%  (       11)
00:07:13.036   9225.452 -  9275.865:   96.6526%  (       11)
00:07:13.036   9275.865 -  9326.277:   96.7324%  (       14)
00:07:13.036   9326.277 -  9376.689:   96.8009%  (       12)
00:07:13.036   9376.689 -  9427.102:   96.8750%  (       13)
00:07:13.036   9427.102 -  9477.514:   96.9548%  (       14)
00:07:13.036   9477.514 -  9527.926:   97.0917%  (       24)
00:07:13.036   9527.926 -  9578.338:   97.1373%  (        8)
00:07:13.036   9578.338 -  9628.751:   97.1715%  (        6)
00:07:13.036   9628.751 -  9679.163:   97.2000%  (        5)
00:07:13.036   9679.163 -  9729.575:   97.2229%  (        4)
00:07:13.036   9729.575 -  9779.988:   97.2514%  (        5)
00:07:13.036   9779.988 -  9830.400:   97.2799%  (        5)
00:07:13.036   9830.400 -  9880.812:   97.3711%  (       16)
00:07:13.036   9880.812 -  9931.225:   97.4624%  (       16)
00:07:13.036   9931.225 -  9981.637:   97.5935%  (       23)
00:07:13.036   9981.637 - 10032.049:   97.6848%  (       16)
00:07:13.036  10032.049 - 10082.462:   97.7760%  (       16)
00:07:13.036  10082.462 - 10132.874:   97.8844%  (       19)
00:07:13.036  10132.874 - 10183.286:   98.0212%  (       24)
00:07:13.036  10183.286 - 10233.698:   98.1353%  (       20)
00:07:13.036  10233.698 - 10284.111:   98.2322%  (       17)
00:07:13.036  10284.111 - 10334.523:   98.3634%  (       23)
00:07:13.036  10334.523 - 10384.935:   98.4204%  (       10)
00:07:13.036  10384.935 - 10435.348:   98.4888%  (       12)
00:07:13.036  10435.348 - 10485.760:   98.5516%  (       11)
00:07:13.036  10485.760 - 10536.172:   98.6029%  (        9)
00:07:13.036  10536.172 - 10586.585:   98.6599%  (       10)
00:07:13.036  10586.585 - 10636.997:   98.7055%  (        8)
00:07:13.036  10636.997 - 10687.409:   98.7397%  (        6)
00:07:13.036  10687.409 - 10737.822:   98.7740%  (        6)
00:07:13.036  10737.822 - 10788.234:   98.7968%  (        4)
00:07:13.036  10788.234 - 10838.646:   98.8139%  (        3)
00:07:13.036  10838.646 - 10889.058:   98.8310%  (        3)
00:07:13.036  10889.058 - 10939.471:   98.8481%  (        3)
00:07:13.037  10939.471 - 10989.883:   98.8652%  (        3)
00:07:13.037  10989.883 - 11040.295:   98.9051%  (        7)
00:07:13.037  11040.295 - 11090.708:   98.9507%  (        8)
00:07:13.037  11090.708 - 11141.120:   98.9678%  (        3)
00:07:13.037  11141.120 - 11191.532:   98.9849%  (        3)
00:07:13.037  11191.532 - 11241.945:   99.0021%  (        3)
00:07:13.037  11241.945 - 11292.357:   99.0192%  (        3)
00:07:13.037  11292.357 - 11342.769:   99.0363%  (        3)
00:07:13.037  11342.769 - 11393.182:   99.0534%  (        3)
00:07:13.037  11393.182 - 11443.594:   99.0762%  (        4)
00:07:13.037  11443.594 - 11494.006:   99.0933%  (        3)
00:07:13.037  11494.006 - 11544.418:   99.1104%  (        3)
00:07:13.037  11544.418 - 11594.831:   99.1332%  (        4)
00:07:13.037  11594.831 - 11645.243:   99.1503%  (        3)
00:07:13.037  11645.243 - 11695.655:   99.1731%  (        4)
00:07:13.037  11695.655 - 11746.068:   99.1902%  (        3)
00:07:13.037  11746.068 - 11796.480:   99.2130%  (        4)
00:07:13.037  11796.480 - 11846.892:   99.2302%  (        3)
00:07:13.037  11846.892 - 11897.305:   99.2473%  (        3)
00:07:13.037  11897.305 - 11947.717:   99.2701%  (        4)
00:07:13.037  22584.714 - 22685.538:   99.2986%  (        5)
00:07:13.037  22685.538 - 22786.363:   99.3214%  (        4)
00:07:13.037  22786.363 - 22887.188:   99.3442%  (        4)
00:07:13.037  22887.188 - 22988.012:   99.3670%  (        4)
00:07:13.037  22988.012 - 23088.837:   99.3898%  (        4)
00:07:13.037  23088.837 - 23189.662:   99.4126%  (        4)
00:07:13.037  23189.662 - 23290.486:   99.4411%  (        5)
00:07:13.037  23290.486 - 23391.311:   99.4640%  (        4)
00:07:13.037  23391.311 - 23492.135:   99.4868%  (        4)
00:07:13.037  23492.135 - 23592.960:   99.5096%  (        4)
00:07:13.037  23592.960 - 23693.785:   99.5324%  (        4)
00:07:13.037  23693.785 - 23794.609:   99.5552%  (        4)
00:07:13.037  23794.609 - 23895.434:   99.5780%  (        4)
00:07:13.037  23895.434 - 23996.258:   99.6008%  (        4)
00:07:13.037  23996.258 - 24097.083:   99.6236%  (        4)
00:07:13.037  24097.083 - 24197.908:   99.6350%  (        2)
00:07:13.037  27424.295 - 27625.945:   99.6750%  (        7)
00:07:13.037  27625.945 - 27827.594:   99.7263%  (        9)
00:07:13.037  27827.594 - 28029.243:   99.7605%  (        6)
00:07:13.037  28029.243 - 28230.892:   99.8118%  (        9)
00:07:13.037  28230.892 - 28432.542:   99.8574%  (        8)
00:07:13.037  28432.542 - 28634.191:   99.9031%  (        8)
00:07:13.037  28634.191 - 28835.840:   99.9487%  (        8)
00:07:13.037  28835.840 - 29037.489:   99.9943%  (        8)
00:07:13.037  29037.489 - 29239.138:  100.0000%  (        1)
00:07:13.037  
00:07:13.037  Latency histogram for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:07:13.037  ==============================================================================
00:07:13.037         Range in us     Cumulative    IO count
00:07:13.037   5671.385 -  5696.591:    0.0114%  (        2)
00:07:13.037   5772.209 -  5797.415:    0.0228%  (        2)
00:07:13.037   5797.415 -  5822.622:    0.0285%  (        1)
00:07:13.037   5822.622 -  5847.828:    0.0342%  (        1)
00:07:13.037   5873.034 -  5898.240:    0.0399%  (        1)
00:07:13.037   5898.240 -  5923.446:    0.0798%  (        7)
00:07:13.037   5923.446 -  5948.652:    0.1198%  (        7)
00:07:13.037   5948.652 -  5973.858:    0.2167%  (       17)
00:07:13.037   5973.858 -  5999.065:    0.3136%  (       17)
00:07:13.037   5999.065 -  6024.271:    0.4505%  (       24)
00:07:13.037   6024.271 -  6049.477:    0.7185%  (       47)
00:07:13.037   6049.477 -  6074.683:    1.0550%  (       59)
00:07:13.037   6074.683 -  6099.889:    1.4142%  (       63)
00:07:13.037   6099.889 -  6125.095:    2.0016%  (      103)
00:07:13.037   6125.095 -  6150.302:    2.7429%  (      130)
00:07:13.037   6150.302 -  6175.508:    3.6040%  (      151)
00:07:13.037   6175.508 -  6200.714:    4.1629%  (       98)
00:07:13.037   6200.714 -  6225.920:    5.0411%  (      154)
00:07:13.037   6225.920 -  6251.126:    5.8508%  (      142)
00:07:13.037   6251.126 -  6276.332:    6.6036%  (      132)
00:07:13.037   6276.332 -  6301.538:    7.3734%  (      135)
00:07:13.037   6301.538 -  6326.745:    8.2801%  (      159)
00:07:13.037   6326.745 -  6351.951:   10.0707%  (      314)
00:07:13.037   6351.951 -  6377.157:   11.8442%  (      311)
00:07:13.037   6377.157 -  6402.363:   13.7603%  (      336)
00:07:13.037   6402.363 -  6427.569:   15.5395%  (      312)
00:07:13.037   6427.569 -  6452.775:   17.7292%  (      384)
00:07:13.037   6452.775 -  6503.188:   21.1622%  (      602)
00:07:13.037   6503.188 -  6553.600:   25.1597%  (      701)
00:07:13.037   6553.600 -  6604.012:   29.1458%  (      699)
00:07:13.037   6604.012 -  6654.425:   32.3962%  (      570)
00:07:13.037   6654.425 -  6704.837:   35.3444%  (      517)
00:07:13.037   6704.837 -  6755.249:   38.9085%  (      625)
00:07:13.037   6755.249 -  6805.662:   43.0771%  (      731)
00:07:13.037   6805.662 -  6856.074:   46.4701%  (      595)
00:07:13.037   6856.074 -  6906.486:   50.7128%  (      744)
00:07:13.037   6906.486 -  6956.898:   54.9954%  (      751)
00:07:13.037   6956.898 -  7007.311:   57.7270%  (      479)
00:07:13.037   7007.311 -  7057.723:   60.1848%  (      431)
00:07:13.037   7057.723 -  7108.135:   63.2641%  (      540)
00:07:13.037   7108.135 -  7158.548:   65.2486%  (      348)
00:07:13.037   7158.548 -  7208.960:   67.6095%  (      414)
00:07:13.037   7208.960 -  7259.372:   69.3545%  (      306)
00:07:13.037   7259.372 -  7309.785:   70.8485%  (      262)
00:07:13.037   7309.785 -  7360.197:   71.9776%  (      198)
00:07:13.037   7360.197 -  7410.609:   73.1182%  (      200)
00:07:13.037   7410.609 -  7461.022:   74.2644%  (      201)
00:07:13.037   7461.022 -  7511.434:   75.2338%  (      170)
00:07:13.037   7511.434 -  7561.846:   75.8953%  (      116)
00:07:13.037   7561.846 -  7612.258:   76.6937%  (      140)
00:07:13.037   7612.258 -  7662.671:   77.3951%  (      123)
00:07:13.037   7662.671 -  7713.083:   77.9482%  (       97)
00:07:13.037   7713.083 -  7763.495:   78.4386%  (       86)
00:07:13.037   7763.495 -  7813.908:   79.2826%  (      148)
00:07:13.037   7813.908 -  7864.320:   79.6932%  (       72)
00:07:13.037   7864.320 -  7914.732:   80.2178%  (       92)
00:07:13.037   7914.732 -  7965.145:   80.6626%  (       78)
00:07:13.037   7965.145 -  8015.557:   81.1131%  (       79)
00:07:13.037   8015.557 -  8065.969:   81.3869%  (       48)
00:07:13.037   8065.969 -  8116.382:   81.7062%  (       56)
00:07:13.037   8116.382 -  8166.794:   82.1054%  (       70)
00:07:13.037   8166.794 -  8217.206:   82.7441%  (      112)
00:07:13.037   8217.206 -  8267.618:   83.5082%  (      134)
00:07:13.037   8267.618 -  8318.031:   84.2838%  (      136)
00:07:13.037   8318.031 -  8368.443:   85.3558%  (      188)
00:07:13.037   8368.443 -  8418.855:   86.3310%  (      171)
00:07:13.037   8418.855 -  8469.268:   87.6996%  (      240)
00:07:13.037   8469.268 -  8519.680:   89.2678%  (      275)
00:07:13.037   8519.680 -  8570.092:   89.9578%  (      121)
00:07:13.037   8570.092 -  8620.505:   90.7676%  (      142)
00:07:13.037   8620.505 -  8670.917:   91.5602%  (      139)
00:07:13.037   8670.917 -  8721.329:   92.3928%  (      146)
00:07:13.037   8721.329 -  8771.742:   93.2710%  (      154)
00:07:13.037   8771.742 -  8822.154:   94.1549%  (      155)
00:07:13.037   8822.154 -  8872.566:   95.0445%  (      156)
00:07:13.037   8872.566 -  8922.978:   95.6432%  (      105)
00:07:13.037   8922.978 -  8973.391:   96.0196%  (       66)
00:07:13.037   8973.391 -  9023.803:   96.2933%  (       48)
00:07:13.037   9023.803 -  9074.215:   96.5671%  (       48)
00:07:13.037   9074.215 -  9124.628:   96.7495%  (       32)
00:07:13.037   9124.628 -  9175.040:   96.9035%  (       27)
00:07:13.037   9175.040 -  9225.452:   97.0119%  (       19)
00:07:13.037   9225.452 -  9275.865:   97.1088%  (       17)
00:07:13.037   9275.865 -  9326.277:   97.1487%  (        7)
00:07:13.037   9326.277 -  9376.689:   97.1715%  (        4)
00:07:13.037   9376.689 -  9427.102:   97.2000%  (        5)
00:07:13.037   9427.102 -  9477.514:   97.2343%  (        6)
00:07:13.037   9477.514 -  9527.926:   97.2571%  (        4)
00:07:13.037   9527.926 -  9578.338:   97.2913%  (        6)
00:07:13.037   9578.338 -  9628.751:   97.3369%  (        8)
00:07:13.037   9628.751 -  9679.163:   97.3825%  (        8)
00:07:13.037   9679.163 -  9729.575:   97.4053%  (        4)
00:07:13.037   9729.575 -  9779.988:   97.4453%  (        7)
00:07:13.037   9779.988 -  9830.400:   97.4738%  (        5)
00:07:13.037   9830.400 -  9880.812:   97.5137%  (        7)
00:07:13.037   9880.812 -  9931.225:   97.5479%  (        6)
00:07:13.037   9931.225 -  9981.637:   97.5593%  (        2)
00:07:13.037   9981.637 - 10032.049:   97.5821%  (        4)
00:07:13.037  10032.049 - 10082.462:   97.6391%  (       10)
00:07:13.037  10082.462 - 10132.874:   97.7190%  (       14)
00:07:13.037  10132.874 - 10183.286:   97.7817%  (       11)
00:07:13.037  10183.286 - 10233.698:   97.8330%  (        9)
00:07:13.037  10233.698 - 10284.111:   97.9015%  (       12)
00:07:13.037  10284.111 - 10334.523:   97.9927%  (       16)
00:07:13.037  10334.523 - 10384.935:   98.1068%  (       20)
00:07:13.037  10384.935 - 10435.348:   98.2037%  (       17)
00:07:13.037  10435.348 - 10485.760:   98.3292%  (       22)
00:07:13.037  10485.760 - 10536.172:   98.4090%  (       14)
00:07:13.037  10536.172 - 10586.585:   98.5002%  (       16)
00:07:13.037  10586.585 - 10636.997:   98.5630%  (       11)
00:07:13.037  10636.997 - 10687.409:   98.6884%  (       22)
00:07:13.038  10687.409 - 10737.822:   98.8424%  (       27)
00:07:13.038  10737.822 - 10788.234:   98.8937%  (        9)
00:07:13.038  10788.234 - 10838.646:   98.9279%  (        6)
00:07:13.038  10838.646 - 10889.058:   98.9678%  (        7)
00:07:13.038  10889.058 - 10939.471:   99.0078%  (        7)
00:07:13.038  10939.471 - 10989.883:   99.0534%  (        8)
00:07:13.038  10989.883 - 11040.295:   99.1047%  (        9)
00:07:13.038  11040.295 - 11090.708:   99.1275%  (        4)
00:07:13.038  11090.708 - 11141.120:   99.1617%  (        6)
00:07:13.038  11141.120 - 11191.532:   99.1845%  (        4)
00:07:13.038  11191.532 - 11241.945:   99.1959%  (        2)
00:07:13.038  11241.945 - 11292.357:   99.2130%  (        3)
00:07:13.038  11292.357 - 11342.769:   99.2245%  (        2)
00:07:13.038  11342.769 - 11393.182:   99.2416%  (        3)
00:07:13.038  11393.182 - 11443.594:   99.2530%  (        2)
00:07:13.038  11443.594 - 11494.006:   99.2701%  (        3)
00:07:13.038  21475.643 - 21576.468:   99.2929%  (        4)
00:07:13.038  21576.468 - 21677.292:   99.3157%  (        4)
00:07:13.038  21677.292 - 21778.117:   99.3385%  (        4)
00:07:13.038  21778.117 - 21878.942:   99.3613%  (        4)
00:07:13.038  21878.942 - 21979.766:   99.3841%  (        4)
00:07:13.038  21979.766 - 22080.591:   99.4069%  (        4)
00:07:13.038  22080.591 - 22181.415:   99.4354%  (        5)
00:07:13.038  22181.415 - 22282.240:   99.4583%  (        4)
00:07:13.038  22282.240 - 22383.065:   99.4811%  (        4)
00:07:13.038  22383.065 - 22483.889:   99.5039%  (        4)
00:07:13.038  22483.889 - 22584.714:   99.5210%  (        3)
00:07:13.038  22584.714 - 22685.538:   99.5438%  (        4)
00:07:13.038  22685.538 - 22786.363:   99.5666%  (        4)
00:07:13.038  22786.363 - 22887.188:   99.5894%  (        4)
00:07:13.038  22887.188 - 22988.012:   99.6122%  (        4)
00:07:13.038  22988.012 - 23088.837:   99.6350%  (        4)
00:07:13.038  26012.751 - 26214.400:   99.6635%  (        5)
00:07:13.038  26214.400 - 26416.049:   99.7092%  (        8)
00:07:13.038  26416.049 - 26617.698:   99.7548%  (        8)
00:07:13.038  26617.698 - 26819.348:   99.8061%  (        9)
00:07:13.038  26819.348 - 27020.997:   99.8460%  (        7)
00:07:13.038  27020.997 - 27222.646:   99.8859%  (        7)
00:07:13.038  27222.646 - 27424.295:   99.9316%  (        8)
00:07:13.038  27424.295 - 27625.945:   99.9772%  (        8)
00:07:13.038  27625.945 - 27827.594:  100.0000%  (        4)
00:07:13.038  
00:07:13.038  Latency histogram for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:07:13.038  ==============================================================================
00:07:13.038         Range in us     Cumulative    IO count
00:07:13.038   5595.766 -  5620.972:    0.0057%  (        1)
00:07:13.038   5772.209 -  5797.415:    0.0114%  (        1)
00:07:13.038   5797.415 -  5822.622:    0.0228%  (        2)
00:07:13.038   5822.622 -  5847.828:    0.0342%  (        2)
00:07:13.038   5873.034 -  5898.240:    0.0456%  (        2)
00:07:13.038   5898.240 -  5923.446:    0.0684%  (        4)
00:07:13.038   5923.446 -  5948.652:    0.0969%  (        5)
00:07:13.038   5948.652 -  5973.858:    0.1597%  (       11)
00:07:13.038   5973.858 -  5999.065:    0.2965%  (       24)
00:07:13.038   5999.065 -  6024.271:    0.5417%  (       43)
00:07:13.038   6024.271 -  6049.477:    0.8041%  (       46)
00:07:13.038   6049.477 -  6074.683:    1.1462%  (       60)
00:07:13.038   6074.683 -  6099.889:    1.5682%  (       74)
00:07:13.038   6099.889 -  6125.095:    1.8020%  (       41)
00:07:13.038   6125.095 -  6150.302:    2.3837%  (      102)
00:07:13.038   6150.302 -  6175.508:    3.1079%  (      127)
00:07:13.038   6175.508 -  6200.714:    3.8891%  (      137)
00:07:13.038   6200.714 -  6225.920:    4.5221%  (      111)
00:07:13.038   6225.920 -  6251.126:    5.2007%  (      119)
00:07:13.038   6251.126 -  6276.332:    6.1017%  (      158)
00:07:13.038   6276.332 -  6301.538:    7.2308%  (      198)
00:07:13.038   6301.538 -  6326.745:    8.8047%  (      276)
00:07:13.038   6326.745 -  6351.951:   10.2589%  (      255)
00:07:13.038   6351.951 -  6377.157:   12.0381%  (      312)
00:07:13.038   6377.157 -  6402.363:   13.9256%  (      331)
00:07:13.038   6402.363 -  6427.569:   15.3513%  (      250)
00:07:13.038   6427.569 -  6452.775:   17.3985%  (      359)
00:07:13.038   6452.775 -  6503.188:   21.1679%  (      661)
00:07:13.038   6503.188 -  6553.600:   24.2872%  (      547)
00:07:13.038   6553.600 -  6604.012:   28.0281%  (      656)
00:07:13.038   6604.012 -  6654.425:   31.9286%  (      684)
00:07:13.038   6654.425 -  6704.837:   35.1562%  (      566)
00:07:13.038   6704.837 -  6755.249:   38.4808%  (      583)
00:07:13.038   6755.249 -  6805.662:   42.3130%  (      672)
00:07:13.038   6805.662 -  6856.074:   46.9035%  (      805)
00:07:13.038   6856.074 -  6906.486:   50.9238%  (      705)
00:07:13.038   6906.486 -  6956.898:   54.9384%  (      704)
00:07:13.038   6956.898 -  7007.311:   57.8410%  (      509)
00:07:13.038   7007.311 -  7057.723:   60.4015%  (      449)
00:07:13.038   7057.723 -  7108.135:   62.7737%  (      416)
00:07:13.038   7108.135 -  7158.548:   65.3855%  (      458)
00:07:13.038   7158.548 -  7208.960:   67.5867%  (      386)
00:07:13.038   7208.960 -  7259.372:   69.2575%  (      293)
00:07:13.038   7259.372 -  7309.785:   70.8999%  (      288)
00:07:13.038   7309.785 -  7360.197:   72.2742%  (      241)
00:07:13.038   7360.197 -  7410.609:   73.5401%  (      222)
00:07:13.038   7410.609 -  7461.022:   74.3955%  (      150)
00:07:13.038   7461.022 -  7511.434:   75.0342%  (      112)
00:07:13.038   7511.434 -  7561.846:   75.7071%  (      118)
00:07:13.038   7561.846 -  7612.258:   76.7450%  (      182)
00:07:13.038   7612.258 -  7662.671:   77.2126%  (       82)
00:07:13.038   7662.671 -  7713.083:   77.7600%  (       96)
00:07:13.038   7713.083 -  7763.495:   78.4843%  (      127)
00:07:13.038   7763.495 -  7813.908:   79.2598%  (      136)
00:07:13.038   7813.908 -  7864.320:   79.8187%  (       98)
00:07:13.038   7864.320 -  7914.732:   80.4231%  (      106)
00:07:13.038   7914.732 -  7965.145:   80.6797%  (       45)
00:07:13.038   7965.145 -  8015.557:   81.0504%  (       65)
00:07:13.038   8015.557 -  8065.969:   81.3583%  (       54)
00:07:13.038   8065.969 -  8116.382:   81.8431%  (       85)
00:07:13.038   8116.382 -  8166.794:   82.2365%  (       69)
00:07:13.038   8166.794 -  8217.206:   82.8182%  (      102)
00:07:13.038   8217.206 -  8267.618:   83.4968%  (      119)
00:07:13.038   8267.618 -  8318.031:   84.4263%  (      163)
00:07:13.038   8318.031 -  8368.443:   85.4528%  (      180)
00:07:13.038   8368.443 -  8418.855:   86.3709%  (      161)
00:07:13.038   8418.855 -  8469.268:   87.5057%  (      199)
00:07:13.038   8469.268 -  8519.680:   88.9713%  (      257)
00:07:13.038   8519.680 -  8570.092:   89.8951%  (      162)
00:07:13.038   8570.092 -  8620.505:   90.8075%  (      160)
00:07:13.038   8620.505 -  8670.917:   91.6743%  (      152)
00:07:13.038   8670.917 -  8721.329:   92.3700%  (      122)
00:07:13.038   8721.329 -  8771.742:   93.2710%  (      158)
00:07:13.038   8771.742 -  8822.154:   94.3602%  (      191)
00:07:13.038   8822.154 -  8872.566:   95.0616%  (      123)
00:07:13.038   8872.566 -  8922.978:   95.8485%  (      138)
00:07:13.038   8922.978 -  8973.391:   96.2705%  (       74)
00:07:13.038   8973.391 -  9023.803:   96.5100%  (       42)
00:07:13.038   9023.803 -  9074.215:   96.6811%  (       30)
00:07:13.038   9074.215 -  9124.628:   96.8237%  (       25)
00:07:13.038   9124.628 -  9175.040:   96.9320%  (       19)
00:07:13.038   9175.040 -  9225.452:   97.0005%  (       12)
00:07:13.038   9225.452 -  9275.865:   97.0518%  (        9)
00:07:13.038   9275.865 -  9326.277:   97.1088%  (       10)
00:07:13.038   9326.277 -  9376.689:   97.1715%  (       11)
00:07:13.038   9376.689 -  9427.102:   97.2343%  (       11)
00:07:13.038   9427.102 -  9477.514:   97.3198%  (       15)
00:07:13.038   9477.514 -  9527.926:   97.3654%  (        8)
00:07:13.038   9527.926 -  9578.338:   97.3939%  (        5)
00:07:13.038   9578.338 -  9628.751:   97.4167%  (        4)
00:07:13.038   9628.751 -  9679.163:   97.4339%  (        3)
00:07:13.038   9679.163 -  9729.575:   97.4453%  (        2)
00:07:13.038   9779.988 -  9830.400:   97.4738%  (        5)
00:07:13.038   9830.400 -  9880.812:   97.5137%  (        7)
00:07:13.038   9880.812 -  9931.225:   97.5593%  (        8)
00:07:13.038   9931.225 -  9981.637:   97.5764%  (        3)
00:07:13.038   9981.637 - 10032.049:   97.5878%  (        2)
00:07:13.038  10032.049 - 10082.462:   97.6106%  (        4)
00:07:13.038  10082.462 - 10132.874:   97.6334%  (        4)
00:07:13.038  10132.874 - 10183.286:   97.6562%  (        4)
00:07:13.038  10183.286 - 10233.698:   97.6905%  (        6)
00:07:13.038  10233.698 - 10284.111:   97.7646%  (       13)
00:07:13.038  10284.111 - 10334.523:   97.8786%  (       20)
00:07:13.038  10334.523 - 10384.935:   97.9642%  (       15)
00:07:13.038  10384.935 - 10435.348:   98.0440%  (       14)
00:07:13.038  10435.348 - 10485.760:   98.1182%  (       13)
00:07:13.038  10485.760 - 10536.172:   98.2208%  (       18)
00:07:13.038  10536.172 - 10586.585:   98.3862%  (       29)
00:07:13.038  10586.585 - 10636.997:   98.5858%  (       35)
00:07:13.038  10636.997 - 10687.409:   98.7112%  (       22)
00:07:13.038  10687.409 - 10737.822:   98.8310%  (       21)
00:07:13.038  10737.822 - 10788.234:   98.9279%  (       17)
00:07:13.038  10788.234 - 10838.646:   99.0021%  (       13)
00:07:13.038  10838.646 - 10889.058:   99.0591%  (       10)
00:07:13.038  10889.058 - 10939.471:   99.1104%  (        9)
00:07:13.038  10939.471 - 10989.883:   99.1731%  (       11)
00:07:13.039  10989.883 - 11040.295:   99.2359%  (       11)
00:07:13.039  11040.295 - 11090.708:   99.2587%  (        4)
00:07:13.039  11090.708 - 11141.120:   99.2701%  (        2)
00:07:13.039  19761.625 - 19862.449:   99.2929%  (        4)
00:07:13.039  19862.449 - 19963.274:   99.3157%  (        4)
00:07:13.039  19963.274 - 20064.098:   99.3385%  (        4)
00:07:13.039  20064.098 - 20164.923:   99.3613%  (        4)
00:07:13.039  20164.923 - 20265.748:   99.3841%  (        4)
00:07:13.039  20265.748 - 20366.572:   99.4126%  (        5)
00:07:13.039  20366.572 - 20467.397:   99.4354%  (        4)
00:07:13.039  20467.397 - 20568.222:   99.4583%  (        4)
00:07:13.039  20568.222 - 20669.046:   99.4811%  (        4)
00:07:13.039  20669.046 - 20769.871:   99.5039%  (        4)
00:07:13.039  20769.871 - 20870.695:   99.5267%  (        4)
00:07:13.039  20870.695 - 20971.520:   99.5438%  (        3)
00:07:13.039  20971.520 - 21072.345:   99.5666%  (        4)
00:07:13.039  21072.345 - 21173.169:   99.5894%  (        4)
00:07:13.039  21173.169 - 21273.994:   99.6122%  (        4)
00:07:13.039  21273.994 - 21374.818:   99.6350%  (        4)
00:07:13.039  24298.732 - 24399.557:   99.6464%  (        2)
00:07:13.039  24399.557 - 24500.382:   99.6693%  (        4)
00:07:13.039  24500.382 - 24601.206:   99.6921%  (        4)
00:07:13.039  24601.206 - 24702.031:   99.7149%  (        4)
00:07:13.039  24702.031 - 24802.855:   99.7434%  (        5)
00:07:13.039  24802.855 - 24903.680:   99.7662%  (        4)
00:07:13.039  24903.680 - 25004.505:   99.7890%  (        4)
00:07:13.039  25004.505 - 25105.329:   99.8118%  (        4)
00:07:13.039  25105.329 - 25206.154:   99.8346%  (        4)
00:07:13.039  25206.154 - 25306.978:   99.8574%  (        4)
00:07:13.039  25306.978 - 25407.803:   99.8802%  (        4)
00:07:13.039  25407.803 - 25508.628:   99.9031%  (        4)
00:07:13.039  25508.628 - 25609.452:   99.9259%  (        4)
00:07:13.039  25609.452 - 25710.277:   99.9487%  (        4)
00:07:13.039  25710.277 - 25811.102:   99.9715%  (        4)
00:07:13.039  25811.102 - 26012.751:  100.0000%  (        5)
00:07:13.039  
00:07:13.039  Latency histogram for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:07:13.039  ==============================================================================
00:07:13.039         Range in us     Cumulative    IO count
00:07:13.039   5772.209 -  5797.415:    0.0057%  (        1)
00:07:13.039   5822.622 -  5847.828:    0.0171%  (        2)
00:07:13.039   5847.828 -  5873.034:    0.0342%  (        3)
00:07:13.039   5873.034 -  5898.240:    0.0798%  (        8)
00:07:13.039   5898.240 -  5923.446:    0.1426%  (       11)
00:07:13.039   5923.446 -  5948.652:    0.2110%  (       12)
00:07:13.039   5948.652 -  5973.858:    0.3764%  (       29)
00:07:13.039   5973.858 -  5999.065:    0.7071%  (       58)
00:07:13.039   5999.065 -  6024.271:    0.8668%  (       28)
00:07:13.039   6024.271 -  6049.477:    1.0208%  (       27)
00:07:13.039   6049.477 -  6074.683:    1.1861%  (       29)
00:07:13.039   6074.683 -  6099.889:    1.5169%  (       58)
00:07:13.039   6099.889 -  6125.095:    2.0244%  (       89)
00:07:13.039   6125.095 -  6150.302:    2.6346%  (      107)
00:07:13.039   6150.302 -  6175.508:    3.2105%  (      101)
00:07:13.039   6175.508 -  6200.714:    3.8777%  (      117)
00:07:13.039   6200.714 -  6225.920:    4.3739%  (       87)
00:07:13.039   6225.920 -  6251.126:    5.0582%  (      120)
00:07:13.039   6251.126 -  6276.332:    6.2728%  (      213)
00:07:13.039   6276.332 -  6301.538:    8.0007%  (      303)
00:07:13.039   6301.538 -  6326.745:    9.0271%  (      180)
00:07:13.039   6326.745 -  6351.951:   10.4927%  (      257)
00:07:13.039   6351.951 -  6377.157:   12.0153%  (      267)
00:07:13.039   6377.157 -  6402.363:   14.3818%  (      415)
00:07:13.039   6402.363 -  6427.569:   16.2922%  (      335)
00:07:13.039   6427.569 -  6452.775:   17.6779%  (      243)
00:07:13.039   6452.775 -  6503.188:   20.4437%  (      485)
00:07:13.039   6503.188 -  6553.600:   23.9621%  (      617)
00:07:13.039   6553.600 -  6604.012:   27.2582%  (      578)
00:07:13.039   6604.012 -  6654.425:   31.0219%  (      660)
00:07:13.039   6654.425 -  6704.837:   34.3978%  (      592)
00:07:13.039   6704.837 -  6755.249:   38.1330%  (      655)
00:07:13.039   6755.249 -  6805.662:   41.9081%  (      662)
00:07:13.039   6805.662 -  6856.074:   46.0652%  (      729)
00:07:13.039   6856.074 -  6906.486:   50.5018%  (      778)
00:07:13.039   6906.486 -  6956.898:   54.5221%  (      705)
00:07:13.039   6956.898 -  7007.311:   57.8581%  (      585)
00:07:13.039   7007.311 -  7057.723:   60.9660%  (      545)
00:07:13.039   7057.723 -  7108.135:   63.6633%  (      473)
00:07:13.039   7108.135 -  7158.548:   65.2657%  (      281)
00:07:13.039   7158.548 -  7208.960:   66.9765%  (      300)
00:07:13.039   7208.960 -  7259.372:   68.7842%  (      317)
00:07:13.039   7259.372 -  7309.785:   70.7174%  (      339)
00:07:13.039   7309.785 -  7360.197:   72.3768%  (      291)
00:07:13.039   7360.197 -  7410.609:   73.1182%  (      130)
00:07:13.039   7410.609 -  7461.022:   74.2130%  (      192)
00:07:13.039   7461.022 -  7511.434:   75.0798%  (      152)
00:07:13.039   7511.434 -  7561.846:   75.7299%  (      114)
00:07:13.039   7561.846 -  7612.258:   76.4028%  (      118)
00:07:13.039   7612.258 -  7662.671:   77.2297%  (      145)
00:07:13.039   7662.671 -  7713.083:   77.9311%  (      123)
00:07:13.039   7713.083 -  7763.495:   78.4158%  (       85)
00:07:13.039   7763.495 -  7813.908:   79.0488%  (      111)
00:07:13.039   7813.908 -  7864.320:   79.9897%  (      165)
00:07:13.039   7864.320 -  7914.732:   80.5657%  (      101)
00:07:13.039   7914.732 -  7965.145:   80.9250%  (       63)
00:07:13.039   7965.145 -  8015.557:   81.3869%  (       81)
00:07:13.039   8015.557 -  8065.969:   81.6549%  (       47)
00:07:13.039   8065.969 -  8116.382:   81.9742%  (       56)
00:07:13.039   8116.382 -  8166.794:   82.3734%  (       70)
00:07:13.039   8166.794 -  8217.206:   82.8866%  (       90)
00:07:13.039   8217.206 -  8267.618:   83.6166%  (      128)
00:07:13.039   8267.618 -  8318.031:   84.7000%  (      190)
00:07:13.039   8318.031 -  8368.443:   85.6353%  (      164)
00:07:13.039   8368.443 -  8418.855:   86.6674%  (      181)
00:07:13.039   8418.855 -  8469.268:   88.0703%  (      246)
00:07:13.039   8469.268 -  8519.680:   89.5757%  (      264)
00:07:13.039   8519.680 -  8570.092:   90.3456%  (      135)
00:07:13.039   8570.092 -  8620.505:   91.0014%  (      115)
00:07:13.039   8620.505 -  8670.917:   91.7028%  (      123)
00:07:13.039   8670.917 -  8721.329:   92.5125%  (      142)
00:07:13.039   8721.329 -  8771.742:   93.1398%  (      110)
00:07:13.039   8771.742 -  8822.154:   94.3545%  (      213)
00:07:13.039   8822.154 -  8872.566:   95.2042%  (      149)
00:07:13.039   8872.566 -  8922.978:   95.7459%  (       95)
00:07:13.039   8922.978 -  8973.391:   96.1622%  (       73)
00:07:13.039   8973.391 -  9023.803:   96.4017%  (       42)
00:07:13.039   9023.803 -  9074.215:   96.6241%  (       39)
00:07:13.039   9074.215 -  9124.628:   96.8009%  (       31)
00:07:13.039   9124.628 -  9175.040:   96.9149%  (       20)
00:07:13.039   9175.040 -  9225.452:   97.0005%  (       15)
00:07:13.039   9225.452 -  9275.865:   97.0461%  (        8)
00:07:13.039   9275.865 -  9326.277:   97.0803%  (        6)
00:07:13.039   9326.277 -  9376.689:   97.1031%  (        4)
00:07:13.039   9376.689 -  9427.102:   97.1544%  (        9)
00:07:13.039   9427.102 -  9477.514:   97.1943%  (        7)
00:07:13.039   9477.514 -  9527.926:   97.3198%  (       22)
00:07:13.039   9527.926 -  9578.338:   97.3597%  (        7)
00:07:13.039   9578.338 -  9628.751:   97.3939%  (        6)
00:07:13.039   9628.751 -  9679.163:   97.4224%  (        5)
00:07:13.039   9679.163 -  9729.575:   97.4909%  (       12)
00:07:13.039   9729.575 -  9779.988:   97.5422%  (        9)
00:07:13.039   9779.988 -  9830.400:   97.5935%  (        9)
00:07:13.039   9830.400 -  9880.812:   97.6163%  (        4)
00:07:13.039   9880.812 -  9931.225:   97.6277%  (        2)
00:07:13.039   9931.225 -  9981.637:   97.6448%  (        3)
00:07:13.039   9981.637 - 10032.049:   97.6620%  (        3)
00:07:13.039  10032.049 - 10082.462:   97.6791%  (        3)
00:07:13.039  10082.462 - 10132.874:   97.6962%  (        3)
00:07:13.039  10132.874 - 10183.286:   97.7532%  (       10)
00:07:13.039  10183.286 - 10233.698:   97.7988%  (        8)
00:07:13.039  10233.698 - 10284.111:   97.8330%  (        6)
00:07:13.039  10284.111 - 10334.523:   97.9300%  (       17)
00:07:13.039  10334.523 - 10384.935:   98.0440%  (       20)
00:07:13.039  10384.935 - 10435.348:   98.1353%  (       16)
00:07:13.039  10435.348 - 10485.760:   98.1866%  (        9)
00:07:13.039  10485.760 - 10536.172:   98.2379%  (        9)
00:07:13.039  10536.172 - 10586.585:   98.2778%  (        7)
00:07:13.039  10586.585 - 10636.997:   98.3292%  (        9)
00:07:13.039  10636.997 - 10687.409:   98.3748%  (        8)
00:07:13.039  10687.409 - 10737.822:   98.4489%  (       13)
00:07:13.039  10737.822 - 10788.234:   98.5401%  (       16)
00:07:13.039  10788.234 - 10838.646:   98.6827%  (       25)
00:07:13.039  10838.646 - 10889.058:   98.7797%  (       17)
00:07:13.039  10889.058 - 10939.471:   98.8994%  (       21)
00:07:13.039  10939.471 - 10989.883:   99.0078%  (       19)
00:07:13.039  10989.883 - 11040.295:   99.0648%  (       10)
00:07:13.039  11040.295 - 11090.708:   99.0990%  (        6)
00:07:13.039  11090.708 - 11141.120:   99.1275%  (        5)
00:07:13.039  11141.120 - 11191.532:   99.1503%  (        4)
00:07:13.039  11191.532 - 11241.945:   99.1731%  (        4)
00:07:13.040  11241.945 - 11292.357:   99.1959%  (        4)
00:07:13.040  11292.357 - 11342.769:   99.2188%  (        4)
00:07:13.040  11342.769 - 11393.182:   99.2359%  (        3)
00:07:13.040  11393.182 - 11443.594:   99.2587%  (        4)
00:07:13.040  11443.594 - 11494.006:   99.2701%  (        2)
00:07:13.040  17946.782 - 18047.606:   99.2758%  (        1)
00:07:13.040  18047.606 - 18148.431:   99.2986%  (        4)
00:07:13.040  18148.431 - 18249.255:   99.3214%  (        4)
00:07:13.040  18249.255 - 18350.080:   99.3442%  (        4)
00:07:13.040  18350.080 - 18450.905:   99.3670%  (        4)
00:07:13.040  18450.905 - 18551.729:   99.3955%  (        5)
00:07:13.040  18551.729 - 18652.554:   99.4183%  (        4)
00:07:13.040  18652.554 - 18753.378:   99.4411%  (        4)
00:07:13.040  18753.378 - 18854.203:   99.4640%  (        4)
00:07:13.040  18854.203 - 18955.028:   99.4925%  (        5)
00:07:13.040  18955.028 - 19055.852:   99.5153%  (        4)
00:07:13.040  19055.852 - 19156.677:   99.5381%  (        4)
00:07:13.040  19156.677 - 19257.502:   99.5609%  (        4)
00:07:13.040  19257.502 - 19358.326:   99.5837%  (        4)
00:07:13.040  19358.326 - 19459.151:   99.6122%  (        5)
00:07:13.040  19459.151 - 19559.975:   99.6350%  (        4)
00:07:13.040  22584.714 - 22685.538:   99.6521%  (        3)
00:07:13.040  22685.538 - 22786.363:   99.6750%  (        4)
00:07:13.040  22786.363 - 22887.188:   99.6978%  (        4)
00:07:13.040  22887.188 - 22988.012:   99.7206%  (        4)
00:07:13.040  22988.012 - 23088.837:   99.7377%  (        3)
00:07:13.040  23088.837 - 23189.662:   99.7662%  (        5)
00:07:13.040  23189.662 - 23290.486:   99.7890%  (        4)
00:07:13.040  23290.486 - 23391.311:   99.8118%  (        4)
00:07:13.040  23391.311 - 23492.135:   99.8346%  (        4)
00:07:13.040  23492.135 - 23592.960:   99.8517%  (        3)
00:07:13.040  23592.960 - 23693.785:   99.8745%  (        4)
00:07:13.040  23693.785 - 23794.609:   99.8974%  (        4)
00:07:13.040  23794.609 - 23895.434:   99.9202%  (        4)
00:07:13.040  23895.434 - 23996.258:   99.9430%  (        4)
00:07:13.040  23996.258 - 24097.083:   99.9658%  (        4)
00:07:13.040  24097.083 - 24197.908:   99.9886%  (        4)
00:07:13.040  24197.908 - 24298.732:  100.0000%  (        2)
00:07:13.040  
00:07:13.040  Latency histogram for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:07:13.040  ==============================================================================
00:07:13.040         Range in us     Cumulative    IO count
00:07:13.040   5494.942 -  5520.148:    0.0057%  (        1)
00:07:13.040   5595.766 -  5620.972:    0.0114%  (        1)
00:07:13.040   5696.591 -  5721.797:    0.0170%  (        1)
00:07:13.040   5747.003 -  5772.209:    0.0227%  (        1)
00:07:13.040   5772.209 -  5797.415:    0.0284%  (        1)
00:07:13.040   5797.415 -  5822.622:    0.0398%  (        2)
00:07:13.040   5822.622 -  5847.828:    0.0568%  (        3)
00:07:13.040   5847.828 -  5873.034:    0.0795%  (        4)
00:07:13.040   5873.034 -  5898.240:    0.0966%  (        3)
00:07:13.040   5898.240 -  5923.446:    0.1307%  (        6)
00:07:13.040   5923.446 -  5948.652:    0.2273%  (       17)
00:07:13.040   5948.652 -  5973.858:    0.3636%  (       24)
00:07:13.040   5973.858 -  5999.065:    0.4943%  (       23)
00:07:13.040   5999.065 -  6024.271:    0.5852%  (       16)
00:07:13.040   6024.271 -  6049.477:    0.7102%  (       22)
00:07:13.040   6049.477 -  6074.683:    0.8693%  (       28)
00:07:13.040   6074.683 -  6099.889:    1.4034%  (       94)
00:07:13.040   6099.889 -  6125.095:    1.8864%  (       85)
00:07:13.040   6125.095 -  6150.302:    2.5455%  (      116)
00:07:13.040   6150.302 -  6175.508:    3.0170%  (       83)
00:07:13.040   6175.508 -  6200.714:    3.6648%  (      114)
00:07:13.040   6200.714 -  6225.920:    4.4659%  (      141)
00:07:13.040   6225.920 -  6251.126:    5.5568%  (      192)
00:07:13.040   6251.126 -  6276.332:    6.3920%  (      147)
00:07:13.040   6276.332 -  6301.538:    7.3977%  (      177)
00:07:13.040   6301.538 -  6326.745:    8.8636%  (      258)
00:07:13.040   6326.745 -  6351.951:   10.3920%  (      269)
00:07:13.040   6351.951 -  6377.157:   12.2386%  (      325)
00:07:13.040   6377.157 -  6402.363:   13.6307%  (      245)
00:07:13.040   6402.363 -  6427.569:   14.9886%  (      239)
00:07:13.040   6427.569 -  6452.775:   16.8580%  (      329)
00:07:13.040   6452.775 -  6503.188:   20.4659%  (      635)
00:07:13.040   6503.188 -  6553.600:   24.2386%  (      664)
00:07:13.040   6553.600 -  6604.012:   27.3295%  (      544)
00:07:13.040   6604.012 -  6654.425:   30.8580%  (      621)
00:07:13.040   6654.425 -  6704.837:   35.1136%  (      749)
00:07:13.040   6704.837 -  6755.249:   39.5511%  (      781)
00:07:13.040   6755.249 -  6805.662:   42.8068%  (      573)
00:07:13.040   6805.662 -  6856.074:   46.2273%  (      602)
00:07:13.040   6856.074 -  6906.486:   50.3125%  (      719)
00:07:13.040   6906.486 -  6956.898:   53.8182%  (      617)
00:07:13.040   6956.898 -  7007.311:   56.5284%  (      477)
00:07:13.040   7007.311 -  7057.723:   60.1932%  (      645)
00:07:13.040   7057.723 -  7108.135:   63.3011%  (      547)
00:07:13.040   7108.135 -  7158.548:   65.6193%  (      408)
00:07:13.040   7158.548 -  7208.960:   67.8466%  (      392)
00:07:13.040   7208.960 -  7259.372:   69.5398%  (      298)
00:07:13.040   7259.372 -  7309.785:   71.1932%  (      291)
00:07:13.040   7309.785 -  7360.197:   71.9318%  (      130)
00:07:13.040   7360.197 -  7410.609:   73.3239%  (      245)
00:07:13.040   7410.609 -  7461.022:   74.1420%  (      144)
00:07:13.040   7461.022 -  7511.434:   74.9375%  (      140)
00:07:13.040   7511.434 -  7561.846:   75.5682%  (      111)
00:07:13.040   7561.846 -  7612.258:   76.4602%  (      157)
00:07:13.040   7612.258 -  7662.671:   77.3182%  (      151)
00:07:13.040   7662.671 -  7713.083:   77.7784%  (       81)
00:07:13.040   7713.083 -  7763.495:   78.3239%  (       96)
00:07:13.040   7763.495 -  7813.908:   78.9148%  (      104)
00:07:13.040   7813.908 -  7864.320:   79.3239%  (       72)
00:07:13.040   7864.320 -  7914.732:   79.8807%  (       98)
00:07:13.040   7914.732 -  7965.145:   80.2955%  (       73)
00:07:13.040   7965.145 -  8015.557:   80.6420%  (       61)
00:07:13.040   8015.557 -  8065.969:   81.3182%  (      119)
00:07:13.040   8065.969 -  8116.382:   82.0795%  (      134)
00:07:13.040   8116.382 -  8166.794:   82.7386%  (      116)
00:07:13.040   8166.794 -  8217.206:   83.4318%  (      122)
00:07:13.040   8217.206 -  8267.618:   84.2330%  (      141)
00:07:13.040   8267.618 -  8318.031:   85.0852%  (      150)
00:07:13.040   8318.031 -  8368.443:   85.9830%  (      158)
00:07:13.040   8368.443 -  8418.855:   86.7841%  (      141)
00:07:13.040   8418.855 -  8469.268:   88.0795%  (      228)
00:07:13.040   8469.268 -  8519.680:   89.2955%  (      214)
00:07:13.040   8519.680 -  8570.092:   90.1875%  (      157)
00:07:13.040   8570.092 -  8620.505:   91.0568%  (      153)
00:07:13.040   8620.505 -  8670.917:   91.7955%  (      130)
00:07:13.040   8670.917 -  8721.329:   92.4375%  (      113)
00:07:13.040   8721.329 -  8771.742:   92.9432%  (       89)
00:07:13.040   8771.742 -  8822.154:   94.0000%  (      186)
00:07:13.040   8822.154 -  8872.566:   94.6989%  (      123)
00:07:13.040   8872.566 -  8922.978:   95.4318%  (      129)
00:07:13.040   8922.978 -  8973.391:   96.0398%  (      107)
00:07:13.040   8973.391 -  9023.803:   96.3068%  (       47)
00:07:13.040   9023.803 -  9074.215:   96.4830%  (       31)
00:07:13.040   9074.215 -  9124.628:   96.6080%  (       22)
00:07:13.040   9124.628 -  9175.040:   96.6875%  (       14)
00:07:13.040   9175.040 -  9225.452:   96.7500%  (       11)
00:07:13.040   9225.452 -  9275.865:   96.8182%  (       12)
00:07:13.040   9275.865 -  9326.277:   96.8920%  (       13)
00:07:13.040   9326.277 -  9376.689:   96.9716%  (       14)
00:07:13.040   9376.689 -  9427.102:   97.0398%  (       12)
00:07:13.040   9427.102 -  9477.514:   97.0795%  (        7)
00:07:13.040   9477.514 -  9527.926:   97.0909%  (        2)
00:07:13.040   9679.163 -  9729.575:   97.1250%  (        6)
00:07:13.040   9729.575 -  9779.988:   97.1989%  (       13)
00:07:13.040   9779.988 -  9830.400:   97.2784%  (       14)
00:07:13.041   9830.400 -  9880.812:   97.3182%  (        7)
00:07:13.041   9880.812 -  9931.225:   97.4545%  (       24)
00:07:13.041   9931.225 -  9981.637:   97.6420%  (       33)
00:07:13.041   9981.637 - 10032.049:   97.8352%  (       34)
00:07:13.041  10032.049 - 10082.462:   97.9034%  (       12)
00:07:13.041  10082.462 - 10132.874:   97.9716%  (       12)
00:07:13.041  10132.874 - 10183.286:   98.0455%  (       13)
00:07:13.041  10183.286 - 10233.698:   98.1250%  (       14)
00:07:13.041  10233.698 - 10284.111:   98.1818%  (       10)
00:07:13.041  10284.111 - 10334.523:   98.2557%  (       13)
00:07:13.041  10334.523 - 10384.935:   98.3182%  (       11)
00:07:13.041  10384.935 - 10435.348:   98.3750%  (       10)
00:07:13.041  10435.348 - 10485.760:   98.4261%  (        9)
00:07:13.041  10485.760 - 10536.172:   98.4602%  (        6)
00:07:13.041  10536.172 - 10586.585:   98.4886%  (        5)
00:07:13.041  10586.585 - 10636.997:   98.5398%  (        9)
00:07:13.041  10636.997 - 10687.409:   98.5852%  (        8)
00:07:13.041  10687.409 - 10737.822:   98.6648%  (       14)
00:07:13.041  10737.822 - 10788.234:   98.7557%  (       16)
00:07:13.041  10788.234 - 10838.646:   98.8011%  (        8)
00:07:13.041  10838.646 - 10889.058:   98.8580%  (       10)
00:07:13.041  10889.058 - 10939.471:   98.9091%  (        9)
00:07:13.041  10939.471 - 10989.883:   98.9716%  (       11)
00:07:13.041  10989.883 - 11040.295:   98.9886%  (        3)
00:07:13.041  11040.295 - 11090.708:   99.0227%  (        6)
00:07:13.041  11090.708 - 11141.120:   99.0455%  (        4)
00:07:13.041  11141.120 - 11191.532:   99.0739%  (        5)
00:07:13.041  11191.532 - 11241.945:   99.1023%  (        5)
00:07:13.041  11241.945 - 11292.357:   99.1307%  (        5)
00:07:13.041  11292.357 - 11342.769:   99.1534%  (        4)
00:07:13.041  11342.769 - 11393.182:   99.1761%  (        4)
00:07:13.041  11393.182 - 11443.594:   99.2045%  (        5)
00:07:13.041  11443.594 - 11494.006:   99.2273%  (        4)
00:07:13.041  11494.006 - 11544.418:   99.2443%  (        3)
00:07:13.041  11544.418 - 11594.831:   99.2557%  (        2)
00:07:13.041  11594.831 - 11645.243:   99.2670%  (        2)
00:07:13.041  11645.243 - 11695.655:   99.2727%  (        1)
00:07:13.041  12502.252 - 12552.665:   99.2784%  (        1)
00:07:13.041  12552.665 - 12603.077:   99.2898%  (        2)
00:07:13.041  12603.077 - 12653.489:   99.3011%  (        2)
00:07:13.041  12653.489 - 12703.902:   99.3125%  (        2)
00:07:13.041  12703.902 - 12754.314:   99.3239%  (        2)
00:07:13.041  12754.314 - 12804.726:   99.3352%  (        2)
00:07:13.041  12804.726 - 12855.138:   99.3466%  (        2)
00:07:13.041  12855.138 - 12905.551:   99.3580%  (        2)
00:07:13.041  12905.551 - 13006.375:   99.3807%  (        4)
00:07:13.041  13006.375 - 13107.200:   99.4034%  (        4)
00:07:13.041  13107.200 - 13208.025:   99.4261%  (        4)
00:07:13.041  13208.025 - 13308.849:   99.4489%  (        4)
00:07:13.041  13308.849 - 13409.674:   99.4716%  (        4)
00:07:13.041  13409.674 - 13510.498:   99.4943%  (        4)
00:07:13.041  13510.498 - 13611.323:   99.5170%  (        4)
00:07:13.041  13611.323 - 13712.148:   99.5455%  (        5)
00:07:13.041  13712.148 - 13812.972:   99.5682%  (        4)
00:07:13.041  13812.972 - 13913.797:   99.5909%  (        4)
00:07:13.041  13913.797 - 14014.622:   99.6136%  (        4)
00:07:13.041  14014.622 - 14115.446:   99.6364%  (        4)
00:07:13.041  17341.834 - 17442.658:   99.6591%  (        4)
00:07:13.041  17442.658 - 17543.483:   99.6818%  (        4)
00:07:13.041  17543.483 - 17644.308:   99.7045%  (        4)
00:07:13.041  17644.308 - 17745.132:   99.7273%  (        4)
00:07:13.041  17745.132 - 17845.957:   99.7500%  (        4)
00:07:13.041  17845.957 - 17946.782:   99.7727%  (        4)
00:07:13.041  17946.782 - 18047.606:   99.7955%  (        4)
00:07:13.041  18047.606 - 18148.431:   99.8182%  (        4)
00:07:13.041  18148.431 - 18249.255:   99.8409%  (        4)
00:07:13.041  18249.255 - 18350.080:   99.8636%  (        4)
00:07:13.041  18350.080 - 18450.905:   99.8864%  (        4)
00:07:13.041  18450.905 - 18551.729:   99.9091%  (        4)
00:07:13.041  18551.729 - 18652.554:   99.9318%  (        4)
00:07:13.041  18652.554 - 18753.378:   99.9545%  (        4)
00:07:13.041  18753.378 - 18854.203:   99.9773%  (        4)
00:07:13.041  18854.203 - 18955.028:  100.0000%  (        4)
00:07:13.041  
00:07:13.041   16:56:35 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:13.041  
00:07:13.041  real	0m2.484s
00:07:13.041  user	0m2.215s
00:07:13.041  sys	0m0.181s
00:07:13.041   16:56:35 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.041   16:56:35 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:13.041  ************************************
00:07:13.041  END TEST nvme_perf
00:07:13.041  ************************************
00:07:13.041   16:56:36 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:13.041   16:56:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:13.041   16:56:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.041   16:56:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:13.041  ************************************
00:07:13.041  START TEST nvme_hello_world
00:07:13.041  ************************************
00:07:13.041   16:56:36 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:13.301  Initializing NVMe Controllers
00:07:13.301  Attached to 0000:00:10.0
00:07:13.301    Namespace ID: 1 size: 6GB
00:07:13.301  Attached to 0000:00:11.0
00:07:13.301    Namespace ID: 1 size: 5GB
00:07:13.301  Attached to 0000:00:13.0
00:07:13.301    Namespace ID: 1 size: 1GB
00:07:13.301  Attached to 0000:00:12.0
00:07:13.301    Namespace ID: 1 size: 4GB
00:07:13.301    Namespace ID: 2 size: 4GB
00:07:13.301    Namespace ID: 3 size: 4GB
00:07:13.301  Initialization complete.
00:07:13.301  INFO: using host memory buffer for IO
00:07:13.301  Hello world!
00:07:13.301  INFO: using host memory buffer for IO
00:07:13.301  Hello world!
00:07:13.301  INFO: using host memory buffer for IO
00:07:13.301  Hello world!
00:07:13.301  INFO: using host memory buffer for IO
00:07:13.301  Hello world!
00:07:13.301  INFO: using host memory buffer for IO
00:07:13.301  Hello world!
00:07:13.301  INFO: using host memory buffer for IO
00:07:13.301  Hello world!
00:07:13.301  
00:07:13.301  real	0m0.234s
00:07:13.301  user	0m0.075s
00:07:13.301  sys	0m0.096s
00:07:13.301   16:56:36 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.301   16:56:36 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:13.301  ************************************
00:07:13.301  END TEST nvme_hello_world
00:07:13.301  ************************************
00:07:13.301   16:56:36 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:13.301   16:56:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.301   16:56:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.301   16:56:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:13.301  ************************************
00:07:13.301  START TEST nvme_sgl
00:07:13.301  ************************************
00:07:13.301   16:56:36 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:13.560  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:13.560  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:13.560  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:13.560  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:13.560  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:13.560  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:13.560  0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:13.560  0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:13.560  0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:13.560  0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:13.560  0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:13.560  0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:13.560  0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:13.560  0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:13.561  0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:13.561  0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:13.561  0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:13.561  0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:13.561  0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:13.561  NVMe Readv/Writev Request test
00:07:13.561  Attached to 0000:00:10.0
00:07:13.561  Attached to 0000:00:11.0
00:07:13.561  Attached to 0000:00:13.0
00:07:13.561  Attached to 0000:00:12.0
00:07:13.561  0000:00:10.0: build_io_request_2 test passed
00:07:13.561  0000:00:10.0: build_io_request_4 test passed
00:07:13.561  0000:00:10.0: build_io_request_5 test passed
00:07:13.561  0000:00:10.0: build_io_request_6 test passed
00:07:13.561  0000:00:10.0: build_io_request_7 test passed
00:07:13.561  0000:00:10.0: build_io_request_10 test passed
00:07:13.561  0000:00:11.0: build_io_request_2 test passed
00:07:13.561  0000:00:11.0: build_io_request_4 test passed
00:07:13.561  0000:00:11.0: build_io_request_5 test passed
00:07:13.561  0000:00:11.0: build_io_request_6 test passed
00:07:13.561  0000:00:11.0: build_io_request_7 test passed
00:07:13.561  0000:00:11.0: build_io_request_10 test passed
00:07:13.561  Cleaning up...
00:07:13.561  
00:07:13.561  real	0m0.279s
00:07:13.561  user	0m0.134s
00:07:13.561  sys	0m0.101s
00:07:13.561   16:56:36 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.561   16:56:36 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:13.561  ************************************
00:07:13.561  END TEST nvme_sgl
00:07:13.561  ************************************
00:07:13.561   16:56:36 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:13.561   16:56:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.561   16:56:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.561   16:56:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:13.561  ************************************
00:07:13.561  START TEST nvme_e2edp
00:07:13.561  ************************************
00:07:13.561   16:56:36 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:13.820  NVMe Write/Read with End-to-End data protection test
00:07:13.820  Attached to 0000:00:10.0
00:07:13.820  Attached to 0000:00:11.0
00:07:13.820  Attached to 0000:00:13.0
00:07:13.820  Attached to 0000:00:12.0
00:07:13.820  Cleaning up...
00:07:13.820  
00:07:13.820  real	0m0.220s
00:07:13.820  user	0m0.074s
00:07:13.820  sys	0m0.099s
00:07:13.820   16:56:36 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.820   16:56:36 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:13.820  ************************************
00:07:13.820  END TEST nvme_e2edp
00:07:13.820  ************************************
00:07:13.820   16:56:36 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:13.820   16:56:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.820   16:56:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.820   16:56:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:13.820  ************************************
00:07:13.820  START TEST nvme_reserve
00:07:13.820  ************************************
00:07:13.820   16:56:36 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:14.079  =====================================================
00:07:14.079  NVMe Controller at PCI bus 0, device 16, function 0
00:07:14.079  =====================================================
00:07:14.079  Reservations:                Not Supported
00:07:14.079  =====================================================
00:07:14.079  NVMe Controller at PCI bus 0, device 17, function 0
00:07:14.079  =====================================================
00:07:14.079  Reservations:                Not Supported
00:07:14.079  =====================================================
00:07:14.079  NVMe Controller at PCI bus 0, device 19, function 0
00:07:14.079  =====================================================
00:07:14.079  Reservations:                Not Supported
00:07:14.079  =====================================================
00:07:14.079  NVMe Controller at PCI bus 0, device 18, function 0
00:07:14.079  =====================================================
00:07:14.079  Reservations:                Not Supported
00:07:14.079  Reservation test passed
00:07:14.079  
00:07:14.079  real	0m0.216s
00:07:14.079  user	0m0.073s
00:07:14.079  sys	0m0.095s
00:07:14.079   16:56:37 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:14.079   16:56:37 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:14.079  ************************************
00:07:14.079  END TEST nvme_reserve
00:07:14.079  ************************************
00:07:14.079   16:56:37 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:14.079   16:56:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:14.079   16:56:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:14.079   16:56:37 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:14.079  ************************************
00:07:14.079  START TEST nvme_err_injection
00:07:14.079  ************************************
00:07:14.079   16:56:37 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:14.338  NVMe Error Injection test
00:07:14.338  Attached to 0000:00:10.0
00:07:14.338  Attached to 0000:00:11.0
00:07:14.338  Attached to 0000:00:13.0
00:07:14.338  Attached to 0000:00:12.0
00:07:14.338  0000:00:10.0: get features failed as expected
00:07:14.338  0000:00:11.0: get features failed as expected
00:07:14.338  0000:00:13.0: get features failed as expected
00:07:14.338  0000:00:12.0: get features failed as expected
00:07:14.338  0000:00:10.0: get features successfully as expected
00:07:14.338  0000:00:11.0: get features successfully as expected
00:07:14.338  0000:00:13.0: get features successfully as expected
00:07:14.338  0000:00:12.0: get features successfully as expected
00:07:14.338  0000:00:10.0: read failed as expected
00:07:14.338  0000:00:11.0: read failed as expected
00:07:14.338  0000:00:13.0: read failed as expected
00:07:14.338  0000:00:12.0: read failed as expected
00:07:14.338  0000:00:11.0: read successfully as expected
00:07:14.338  0000:00:13.0: read successfully as expected
00:07:14.338  0000:00:12.0: read successfully as expected
00:07:14.338  0000:00:10.0: read successfully as expected
00:07:14.338  Cleaning up...
00:07:14.338  
00:07:14.338  real	0m0.237s
00:07:14.338  user	0m0.086s
00:07:14.338  sys	0m0.093s
00:07:14.338   16:56:37 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:14.338   16:56:37 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:14.338  ************************************
00:07:14.338  END TEST nvme_err_injection
00:07:14.338  ************************************
00:07:14.338   16:56:37 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:14.338   16:56:37 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:14.338   16:56:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:14.338   16:56:37 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:14.338  ************************************
00:07:14.338  START TEST nvme_overhead
00:07:14.338  ************************************
00:07:14.338   16:56:37 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:15.717  Initializing NVMe Controllers
00:07:15.717  Attached to 0000:00:10.0
00:07:15.717  Attached to 0000:00:11.0
00:07:15.717  Attached to 0000:00:13.0
00:07:15.717  Attached to 0000:00:12.0
00:07:15.717  Initialization complete. Launching workers.
00:07:15.717  submit (in ns)   avg, min, max =  11393.2,  10153.8,  84478.5
00:07:15.717  complete (in ns) avg, min, max =   7689.2,   7244.6,  63445.4
00:07:15.717  
00:07:15.717  Submit histogram
00:07:15.717  ================
00:07:15.717         Range in us     Cumulative     Count
00:07:15.717     10.142 -    10.191:    0.0061%  (        1)
00:07:15.717     10.191 -    10.240:    0.0184%  (        2)
00:07:15.717     10.289 -    10.338:    0.0246%  (        1)
00:07:15.717     10.338 -    10.388:    0.0307%  (        1)
00:07:15.717     10.388 -    10.437:    0.0368%  (        1)
00:07:15.717     10.585 -    10.634:    0.0430%  (        1)
00:07:15.717     10.683 -    10.732:    0.0491%  (        1)
00:07:15.717     10.732 -    10.782:    0.0614%  (        2)
00:07:15.717     10.782 -    10.831:    0.1780%  (       19)
00:07:15.717     10.831 -    10.880:    0.6876%  (       83)
00:07:15.717     10.880 -    10.929:    2.7810%  (      341)
00:07:15.717     10.929 -    10.978:    8.7605%  (      974)
00:07:15.717     10.978 -    11.028:   19.4855%  (     1747)
00:07:15.717     11.028 -    11.077:   34.3361%  (     2419)
00:07:15.717     11.077 -    11.126:   49.2971%  (     2437)
00:07:15.717     11.126 -    11.175:   61.8331%  (     2042)
00:07:15.717     11.175 -    11.225:   69.8447%  (     1305)
00:07:15.717     11.225 -    11.274:   74.5779%  (      771)
00:07:15.717     11.274 -    11.323:   77.5922%  (      491)
00:07:15.717     11.323 -    11.372:   79.6673%  (      338)
00:07:15.717     11.372 -    11.422:   81.2389%  (      256)
00:07:15.717     11.422 -    11.471:   82.6754%  (      234)
00:07:15.717     11.471 -    11.520:   83.7989%  (      183)
00:07:15.717     11.520 -    11.569:   84.8916%  (      178)
00:07:15.717     11.569 -    11.618:   85.7020%  (      132)
00:07:15.717     11.618 -    11.668:   86.1563%  (       74)
00:07:15.717     11.668 -    11.717:   86.7150%  (       91)
00:07:15.717     11.717 -    11.766:   87.3903%  (      110)
00:07:15.718     11.766 -    11.815:   88.1515%  (      124)
00:07:15.718     11.815 -    11.865:   89.0969%  (      154)
00:07:15.718     11.865 -    11.914:   90.2204%  (      183)
00:07:15.718     11.914 -    11.963:   91.2702%  (      171)
00:07:15.718     11.963 -    12.012:   92.4550%  (      193)
00:07:15.718     12.012 -    12.062:   93.3882%  (      152)
00:07:15.718     12.062 -    12.111:   94.2108%  (      134)
00:07:15.718     12.111 -    12.160:   94.8923%  (      111)
00:07:15.718     12.160 -    12.209:   95.4448%  (       90)
00:07:15.718     12.209 -    12.258:   95.8377%  (       64)
00:07:15.718     12.258 -    12.308:   96.1569%  (       52)
00:07:15.718     12.308 -    12.357:   96.3411%  (       30)
00:07:15.718     12.357 -    12.406:   96.4761%  (       22)
00:07:15.718     12.406 -    12.455:   96.5498%  (       12)
00:07:15.718     12.455 -    12.505:   96.5867%  (        6)
00:07:15.718     12.505 -    12.554:   96.6296%  (        7)
00:07:15.718     12.554 -    12.603:   96.6726%  (        7)
00:07:15.718     12.603 -    12.702:   96.7401%  (       11)
00:07:15.718     12.702 -    12.800:   96.8077%  (       11)
00:07:15.718     12.800 -    12.898:   96.8691%  (       10)
00:07:15.718     12.898 -    12.997:   96.9796%  (       18)
00:07:15.718     12.997 -    13.095:   97.1269%  (       24)
00:07:15.718     13.095 -    13.194:   97.2620%  (       22)
00:07:15.718     13.194 -    13.292:   97.4277%  (       27)
00:07:15.718     13.292 -    13.391:   97.5014%  (       12)
00:07:15.718     13.391 -    13.489:   97.5935%  (       15)
00:07:15.718     13.489 -    13.588:   97.6426%  (        8)
00:07:15.718     13.588 -    13.686:   97.6794%  (        6)
00:07:15.718     13.686 -    13.785:   97.7163%  (        6)
00:07:15.718     13.785 -    13.883:   97.7408%  (        4)
00:07:15.718     13.883 -    13.982:   97.7654%  (        4)
00:07:15.718     13.982 -    14.080:   97.8083%  (        7)
00:07:15.718     14.080 -    14.178:   97.8329%  (        4)
00:07:15.718     14.178 -    14.277:   97.8574%  (        4)
00:07:15.718     14.277 -    14.375:   97.9004%  (        7)
00:07:15.718     14.375 -    14.474:   97.9127%  (        2)
00:07:15.718     14.474 -    14.572:   97.9495%  (        6)
00:07:15.718     14.572 -    14.671:   97.9741%  (        4)
00:07:15.718     14.671 -    14.769:   98.0048%  (        5)
00:07:15.718     14.769 -    14.868:   98.0478%  (        7)
00:07:15.718     14.868 -    14.966:   98.0662%  (        3)
00:07:15.718     14.966 -    15.065:   98.0969%  (        5)
00:07:15.718     15.065 -    15.163:   98.1153%  (        3)
00:07:15.718     15.163 -    15.262:   98.1583%  (        7)
00:07:15.718     15.262 -    15.360:   98.1644%  (        1)
00:07:15.718     15.360 -    15.458:   98.1951%  (        5)
00:07:15.718     15.458 -    15.557:   98.2381%  (        7)
00:07:15.718     15.557 -    15.655:   98.2565%  (        3)
00:07:15.718     15.655 -    15.754:   98.2688%  (        2)
00:07:15.718     15.754 -    15.852:   98.2995%  (        5)
00:07:15.718     15.852 -    15.951:   98.3179%  (        3)
00:07:15.718     15.951 -    16.049:   98.3363%  (        3)
00:07:15.718     16.049 -    16.148:   98.3547%  (        3)
00:07:15.718     16.246 -    16.345:   98.3731%  (        3)
00:07:15.718     16.345 -    16.443:   98.4038%  (        5)
00:07:15.718     16.443 -    16.542:   98.4407%  (        6)
00:07:15.718     16.542 -    16.640:   98.5266%  (       14)
00:07:15.718     16.640 -    16.738:   98.5941%  (       11)
00:07:15.718     16.738 -    16.837:   98.6924%  (       16)
00:07:15.718     16.837 -    16.935:   98.7967%  (       17)
00:07:15.718     16.935 -    17.034:   98.8888%  (       15)
00:07:15.718     17.034 -    17.132:   98.9502%  (       10)
00:07:15.718     17.132 -    17.231:   98.9932%  (        7)
00:07:15.718     17.231 -    17.329:   99.0669%  (       12)
00:07:15.718     17.329 -    17.428:   99.1712%  (       17)
00:07:15.718     17.428 -    17.526:   99.2449%  (       12)
00:07:15.718     17.526 -    17.625:   99.3124%  (       11)
00:07:15.718     17.625 -    17.723:   99.4168%  (       17)
00:07:15.718     17.723 -    17.822:   99.5027%  (       14)
00:07:15.718     17.822 -    17.920:   99.5457%  (        7)
00:07:15.718     17.920 -    18.018:   99.5641%  (        3)
00:07:15.718     18.018 -    18.117:   99.5948%  (        5)
00:07:15.718     18.117 -    18.215:   99.6071%  (        2)
00:07:15.718     18.215 -    18.314:   99.6439%  (        6)
00:07:15.718     18.314 -    18.412:   99.6623%  (        3)
00:07:15.718     18.412 -    18.511:   99.6746%  (        2)
00:07:15.718     18.511 -    18.609:   99.6930%  (        3)
00:07:15.718     18.609 -    18.708:   99.6992%  (        1)
00:07:15.718     19.102 -    19.200:   99.7115%  (        2)
00:07:15.718     19.200 -    19.298:   99.7176%  (        1)
00:07:15.718     19.397 -    19.495:   99.7237%  (        1)
00:07:15.718     19.495 -    19.594:   99.7299%  (        1)
00:07:15.718     19.594 -    19.692:   99.7422%  (        2)
00:07:15.718     19.692 -    19.791:   99.7544%  (        2)
00:07:15.718     19.791 -    19.889:   99.7606%  (        1)
00:07:15.718     19.889 -    19.988:   99.7667%  (        1)
00:07:15.718     20.086 -    20.185:   99.7790%  (        2)
00:07:15.718     20.185 -    20.283:   99.7913%  (        2)
00:07:15.718     20.283 -    20.382:   99.7974%  (        1)
00:07:15.718     20.480 -    20.578:   99.8035%  (        1)
00:07:15.718     20.578 -    20.677:   99.8097%  (        1)
00:07:15.718     20.874 -    20.972:   99.8281%  (        3)
00:07:15.718     20.972 -    21.071:   99.8342%  (        1)
00:07:15.718     21.071 -    21.169:   99.8465%  (        2)
00:07:15.718     21.268 -    21.366:   99.8588%  (        2)
00:07:15.718     21.563 -    21.662:   99.8649%  (        1)
00:07:15.718     21.662 -    21.760:   99.8711%  (        1)
00:07:15.718     21.858 -    21.957:   99.8834%  (        2)
00:07:15.718     22.154 -    22.252:   99.8895%  (        1)
00:07:15.718     22.351 -    22.449:   99.8956%  (        1)
00:07:15.718     22.449 -    22.548:   99.9018%  (        1)
00:07:15.718     22.843 -    22.942:   99.9079%  (        1)
00:07:15.718     22.942 -    23.040:   99.9141%  (        1)
00:07:15.718     23.828 -    23.926:   99.9202%  (        1)
00:07:15.718     24.025 -    24.123:   99.9263%  (        1)
00:07:15.718     25.797 -    25.994:   99.9325%  (        1)
00:07:15.718     26.585 -    26.782:   99.9386%  (        1)
00:07:15.718     28.554 -    28.751:   99.9447%  (        1)
00:07:15.718     30.523 -    30.720:   99.9509%  (        1)
00:07:15.718     32.689 -    32.886:   99.9570%  (        1)
00:07:15.718     33.477 -    33.674:   99.9632%  (        1)
00:07:15.718     34.658 -    34.855:   99.9693%  (        1)
00:07:15.718     42.338 -    42.535:   99.9754%  (        1)
00:07:15.718     44.702 -    44.898:   99.9816%  (        1)
00:07:15.718     55.138 -    55.532:   99.9877%  (        1)
00:07:15.718     84.283 -    84.677:  100.0000%  (        2)
00:07:15.718  
00:07:15.718  Complete histogram
00:07:15.718  ==================
00:07:15.718         Range in us     Cumulative     Count
00:07:15.718      7.237 -     7.286:    0.0123%  (        2)
00:07:15.718      7.286 -     7.335:    0.2149%  (       33)
00:07:15.718      7.335 -     7.385:    2.8854%  (      435)
00:07:15.718      7.385 -     7.434:   14.9979%  (     1973)
00:07:15.718      7.434 -     7.483:   37.6880%  (     3696)
00:07:15.718      7.483 -     7.532:   60.1449%  (     3658)
00:07:15.718      7.532 -     7.582:   76.1311%  (     2604)
00:07:15.718      7.582 -     7.631:   84.9408%  (     1435)
00:07:15.718      7.631 -     7.680:   90.4230%  (      893)
00:07:15.718      7.680 -     7.729:   93.6030%  (      518)
00:07:15.718      7.729 -     7.778:   95.3159%  (      279)
00:07:15.718      7.778 -     7.828:   96.1938%  (      143)
00:07:15.718      7.828 -     7.877:   96.6112%  (       68)
00:07:15.718      7.877 -     7.926:   96.8322%  (       36)
00:07:15.718      7.926 -     7.975:   96.9611%  (       21)
00:07:15.718      7.975 -     8.025:   97.0532%  (       15)
00:07:15.718      8.025 -     8.074:   97.0962%  (        7)
00:07:15.718      8.074 -     8.123:   97.1453%  (        8)
00:07:15.718      8.123 -     8.172:   97.2006%  (        9)
00:07:15.718      8.172 -     8.222:   97.2190%  (        3)
00:07:15.718      8.222 -     8.271:   97.2374%  (        3)
00:07:15.718      8.271 -     8.320:   97.2558%  (        3)
00:07:15.718      8.320 -     8.369:   97.2620%  (        1)
00:07:15.718      8.369 -     8.418:   97.2804%  (        3)
00:07:15.718      8.418 -     8.468:   97.2927%  (        2)
00:07:15.718      8.468 -     8.517:   97.2988%  (        1)
00:07:15.718      8.517 -     8.566:   97.3049%  (        1)
00:07:15.718      8.566 -     8.615:   97.3111%  (        1)
00:07:15.718      8.615 -     8.665:   97.3233%  (        2)
00:07:15.718      8.665 -     8.714:   97.3295%  (        1)
00:07:15.718      8.763 -     8.812:   97.3356%  (        1)
00:07:15.718      9.206 -     9.255:   97.3479%  (        2)
00:07:15.718      9.452 -     9.502:   97.3540%  (        1)
00:07:15.718      9.797 -     9.846:   97.3602%  (        1)
00:07:15.718      9.895 -     9.945:   97.3725%  (        2)
00:07:15.718      9.994 -    10.043:   97.3786%  (        1)
00:07:15.718     10.043 -    10.092:   97.3847%  (        1)
00:07:15.718     10.191 -    10.240:   97.3970%  (        2)
00:07:15.718     10.240 -    10.289:   97.4032%  (        1)
00:07:15.718     10.388 -    10.437:   97.4154%  (        2)
00:07:15.719     10.486 -    10.535:   97.4216%  (        1)
00:07:15.719     10.634 -    10.683:   97.4277%  (        1)
00:07:15.719     10.732 -    10.782:   97.4400%  (        2)
00:07:15.719     10.880 -    10.929:   97.4461%  (        1)
00:07:15.719     10.929 -    10.978:   97.4523%  (        1)
00:07:15.719     10.978 -    11.028:   97.4768%  (        4)
00:07:15.719     11.028 -    11.077:   97.4952%  (        3)
00:07:15.719     11.077 -    11.126:   97.5198%  (        4)
00:07:15.719     11.126 -    11.175:   97.5751%  (        9)
00:07:15.719     11.175 -    11.225:   97.6180%  (        7)
00:07:15.719     11.225 -    11.274:   97.6794%  (       10)
00:07:15.719     11.274 -    11.323:   97.7715%  (       15)
00:07:15.719     11.323 -    11.372:   97.8697%  (       16)
00:07:15.719     11.372 -    11.422:   97.9741%  (       17)
00:07:15.719     11.422 -    11.471:   98.0232%  (        8)
00:07:15.719     11.471 -    11.520:   98.0662%  (        7)
00:07:15.719     11.520 -    11.569:   98.0969%  (        5)
00:07:15.719     11.569 -    11.618:   98.1460%  (        8)
00:07:15.719     11.618 -    11.668:   98.1644%  (        3)
00:07:15.719     11.717 -    11.766:   98.1828%  (        3)
00:07:15.719     11.766 -    11.815:   98.2135%  (        5)
00:07:15.719     11.865 -    11.914:   98.2197%  (        1)
00:07:15.719     11.914 -    11.963:   98.2258%  (        1)
00:07:15.719     11.963 -    12.012:   98.2442%  (        3)
00:07:15.719     12.012 -    12.062:   98.2626%  (        3)
00:07:15.719     12.062 -    12.111:   98.2688%  (        1)
00:07:15.719     12.111 -    12.160:   98.2810%  (        2)
00:07:15.719     12.258 -    12.308:   98.2872%  (        1)
00:07:15.719     12.308 -    12.357:   98.2933%  (        1)
00:07:15.719     12.357 -    12.406:   98.3056%  (        2)
00:07:15.719     12.603 -    12.702:   98.3117%  (        1)
00:07:15.719     12.800 -    12.898:   98.3547%  (        7)
00:07:15.719     12.898 -    12.997:   98.4284%  (       12)
00:07:15.719     12.997 -    13.095:   98.5328%  (       17)
00:07:15.719     13.095 -    13.194:   98.6678%  (       22)
00:07:15.719     13.194 -    13.292:   98.7353%  (       11)
00:07:15.719     13.292 -    13.391:   98.8213%  (       14)
00:07:15.719     13.391 -    13.489:   98.8950%  (       12)
00:07:15.719     13.489 -    13.588:   98.9870%  (       15)
00:07:15.719     13.588 -    13.686:   99.0853%  (       16)
00:07:15.719     13.686 -    13.785:   99.1896%  (       17)
00:07:15.719     13.785 -    13.883:   99.3124%  (       20)
00:07:15.719     13.883 -    13.982:   99.3738%  (       10)
00:07:15.719     13.982 -    14.080:   99.4168%  (        7)
00:07:15.719     14.080 -    14.178:   99.4720%  (        9)
00:07:15.719     14.178 -    14.277:   99.4966%  (        4)
00:07:15.719     14.277 -    14.375:   99.5396%  (        7)
00:07:15.719     14.375 -    14.474:   99.6071%  (       11)
00:07:15.719     14.474 -    14.572:   99.6194%  (        2)
00:07:15.719     14.572 -    14.671:   99.6255%  (        1)
00:07:15.719     14.671 -    14.769:   99.6317%  (        1)
00:07:15.719     14.769 -    14.868:   99.6378%  (        1)
00:07:15.719     14.868 -    14.966:   99.6439%  (        1)
00:07:15.719     14.966 -    15.065:   99.6685%  (        4)
00:07:15.719     15.065 -    15.163:   99.6746%  (        1)
00:07:15.719     15.163 -    15.262:   99.6869%  (        2)
00:07:15.719     15.262 -    15.360:   99.6930%  (        1)
00:07:15.719     15.360 -    15.458:   99.7115%  (        3)
00:07:15.719     15.557 -    15.655:   99.7237%  (        2)
00:07:15.719     15.655 -    15.754:   99.7422%  (        3)
00:07:15.719     15.754 -    15.852:   99.7544%  (        2)
00:07:15.719     15.852 -    15.951:   99.7606%  (        1)
00:07:15.719     16.148 -    16.246:   99.7729%  (        2)
00:07:15.719     16.246 -    16.345:   99.7790%  (        1)
00:07:15.719     16.345 -    16.443:   99.7851%  (        1)
00:07:15.719     16.542 -    16.640:   99.7974%  (        2)
00:07:15.719     16.837 -    16.935:   99.8035%  (        1)
00:07:15.719     17.132 -    17.231:   99.8158%  (        2)
00:07:15.719     17.329 -    17.428:   99.8220%  (        1)
00:07:15.719     17.625 -    17.723:   99.8342%  (        2)
00:07:15.719     18.018 -    18.117:   99.8404%  (        1)
00:07:15.719     18.215 -    18.314:   99.8465%  (        1)
00:07:15.719     18.412 -    18.511:   99.8527%  (        1)
00:07:15.719     18.511 -    18.609:   99.8649%  (        2)
00:07:15.719     18.806 -    18.905:   99.8711%  (        1)
00:07:15.719     18.905 -    19.003:   99.8772%  (        1)
00:07:15.719     19.102 -    19.200:   99.8834%  (        1)
00:07:15.719     19.200 -    19.298:   99.9018%  (        3)
00:07:15.719     19.495 -    19.594:   99.9079%  (        1)
00:07:15.719     19.594 -    19.692:   99.9141%  (        1)
00:07:15.719     19.692 -    19.791:   99.9202%  (        1)
00:07:15.719     20.382 -    20.480:   99.9263%  (        1)
00:07:15.719     20.578 -    20.677:   99.9386%  (        2)
00:07:15.719     21.366 -    21.465:   99.9447%  (        1)
00:07:15.719     22.646 -    22.745:   99.9509%  (        1)
00:07:15.719     27.766 -    27.963:   99.9632%  (        2)
00:07:15.719     33.280 -    33.477:   99.9693%  (        1)
00:07:15.719     33.674 -    33.871:   99.9754%  (        1)
00:07:15.719     39.188 -    39.385:   99.9816%  (        1)
00:07:15.719     59.865 -    60.258:   99.9877%  (        1)
00:07:15.719     61.834 -    62.228:   99.9939%  (        1)
00:07:15.719     63.409 -    63.803:  100.0000%  (        1)
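Note on reading the "Complete histogram" above: each row is one latency bucket ("Range in us"), the running share of all I/Os at or below that bucket ("Cumulative", a percentage), and the raw sample count for the bucket; empty buckets are simply omitted, which is why the ranges are not contiguous. A percentile can be pulled out of a capture like this with a small filter. A minimal sketch, assuming the log is read as-is with the Jenkins timestamp column still present (this helper is illustrative, not part of the harness, and it reports the first histogram in the file that crosses the target):

    awk -v target=99 '{
        for (i = 2; i < NF; i++)
            if ($i == "-" && $(i + 2) ~ /%$/) {
                pct = $(i + 2); sub(/%$/, "", pct)      # strip the % sign
                if (pct + 0 >= target) {
                    hi = $(i + 1); sub(/:$/, "", hi)    # strip the trailing colon
                    printf "first bucket >= %d%%: %s - %s us\n", target, $(i - 1), hi
                    exit
                }
            }
    }' build.log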
00:07:15.719  
00:07:15.719  
00:07:15.719  real	0m1.222s
00:07:15.719  user	0m1.073s
00:07:15.719  sys	0m0.100s
00:07:15.719   16:56:38 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:15.719   16:56:38 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:15.719  ************************************
00:07:15.719  END TEST nvme_overhead
00:07:15.719  ************************************
00:07:15.719   16:56:38 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:15.719   16:56:38 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:15.719   16:56:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:15.719   16:56:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:15.719  ************************************
00:07:15.719  START TEST nvme_arbitration
00:07:15.719  ************************************
00:07:15.719   16:56:38 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:19.007  Initializing NVMe Controllers
00:07:19.007  Attached to 0000:00:10.0
00:07:19.007  Attached to 0000:00:11.0
00:07:19.007  Attached to 0000:00:13.0
00:07:19.007  Attached to 0000:00:12.0
00:07:19.007  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:07:19.007  Associating QEMU NVMe Ctrl       (12341               ) with lcore 1
00:07:19.007  Associating QEMU NVMe Ctrl       (12343               ) with lcore 2
00:07:19.007  Associating QEMU NVMe Ctrl       (12342               ) with lcore 3
00:07:19.007  Associating QEMU NVMe Ctrl       (12342               ) with lcore 0
00:07:19.007  Associating QEMU NVMe Ctrl       (12342               ) with lcore 1
00:07:19.007  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:19.007  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:19.007  Initialization complete. Launching workers.
00:07:19.007  Starting thread on core 1 with urgent priority queue
00:07:19.007  Starting thread on core 2 with urgent priority queue
00:07:19.007  Starting thread on core 3 with urgent priority queue
00:07:19.007  Starting thread on core 0 with urgent priority queue
00:07:19.007  QEMU NVMe Ctrl       (12340               ) core 0:   938.67 IO/s   106.53 secs/100000 ios
00:07:19.007  QEMU NVMe Ctrl       (12342               ) core 0:   938.67 IO/s   106.53 secs/100000 ios
00:07:19.007  QEMU NVMe Ctrl       (12341               ) core 1:   917.33 IO/s   109.01 secs/100000 ios
00:07:19.007  QEMU NVMe Ctrl       (12342               ) core 1:   917.33 IO/s   109.01 secs/100000 ios
00:07:19.007  QEMU NVMe Ctrl       (12343               ) core 2:   938.67 IO/s   106.53 secs/100000 ios
00:07:19.007  QEMU NVMe Ctrl       (12342               ) core 3:   853.33 IO/s   117.19 secs/100000 ios
00:07:19.007  ========================================================
00:07:19.007  
00:07:19.007  
00:07:19.007  real	0m3.282s
00:07:19.007  user	0m9.227s
00:07:19.007  sys	0m0.111s
00:07:19.007  ************************************
00:07:19.007  END TEST nvme_arbitration
00:07:19.007  ************************************
00:07:19.007   16:56:41 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:19.007   16:56:41 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
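The nvme_arbitration pass drives the SPDK arbitration example for 3 seconds: four worker cores (mask 0xf) each submit through an urgent-priority queue pair against the attached controllers, and throughput per (controller, core) pair is reported. The harness only passed "-t 3 -i 0"; the long command line above is the tool echoing its effective configuration. A standalone re-run sketch (SPDK_ROOT is a convenience variable for the checkout path used in this run; root privileges for PCIe access are assumed):

    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_ROOT/build/examples/arbitration" \
        -q 64 -s 131072 -w randrw -M 50 -l 0 \
        -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0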
00:07:19.007   16:56:41 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:19.007   16:56:41 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:19.007   16:56:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:19.007   16:56:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:19.007  ************************************
00:07:19.007  START TEST nvme_single_aen
00:07:19.007  ************************************
00:07:19.007   16:56:41 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:19.268  Asynchronous Event Request test
00:07:19.268  Attached to 0000:00:10.0
00:07:19.268  Attached to 0000:00:11.0
00:07:19.268  Attached to 0000:00:13.0
00:07:19.268  Attached to 0000:00:12.0
00:07:19.268  Reset controller to setup AER completions for this process
00:07:19.268  Registering asynchronous event callbacks...
00:07:19.268  Getting orig temperature thresholds of all controllers
00:07:19.268  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:19.268  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:19.268  0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:19.268  0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:19.268  Setting all controllers temperature threshold low to trigger AER
00:07:19.268  Waiting for all controllers temperature threshold to be set lower
00:07:19.268  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:19.268  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:19.268  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:19.268  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:19.268  0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:19.268  aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:19.268  0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:19.268  aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:19.268  Waiting for all controllers to trigger AER and reset threshold
00:07:19.268  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:19.268  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:19.268  0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:19.268  0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:19.268  Cleaning up...
00:07:19.268  
00:07:19.268  real	0m0.208s
00:07:19.268  user	0m0.067s
00:07:19.268  sys	0m0.102s
00:07:19.268  ************************************
00:07:19.268  END TEST nvme_single_aen
00:07:19.268  ************************************
00:07:19.268   16:56:42 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:19.268   16:56:42 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
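The single-AEN pass works by making each controller fire an event on demand: read the over-temperature threshold (343 Kelvin here), program a value below the reported composite temperature (323 Kelvin), wait for the asynchronous event pointing at SMART / health log page 2, then restore the original threshold. A rough nvme-cli analogue for a kernel-attached device, purely as an illustration (it assumes /dev/nvme0 is bound to the kernel driver, which is not the case for the SPDK-claimed devices above):

    # Temperature Threshold is Feature ID 0x04; SMART / health is log page 0x02.
    sudo nvme get-feature /dev/nvme0 -f 0x04           # read the current threshold
    sudo nvme set-feature /dev/nvme0 -f 0x04 -v 310    # 310 K < 323 K composite temp, so an AEN fires
    sudo nvme get-log /dev/nvme0 -i 0x02 -l 512        # fetch the log page the event points at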
00:07:19.268   16:56:42 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:19.268   16:56:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:19.268   16:56:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:19.268   16:56:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:19.268  ************************************
00:07:19.268  START TEST nvme_doorbell_aers
00:07:19.268  ************************************
00:07:19.268   16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:19.268   16:56:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:19.268   16:56:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:19.268   16:56:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:19.268    16:56:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:19.268    16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:19.268    16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:19.268    16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:19.268     16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:19.268     16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:19.268    16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:19.268    16:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:19.268   16:56:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:19.268   16:56:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:19.604  [2024-12-09 16:56:42.526536] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:29.579  Executing: test_write_invalid_db
00:07:29.579  Waiting for AER completion...
00:07:29.579  Failure: test_write_invalid_db
00:07:29.579  
00:07:29.579  Executing: test_invalid_db_write_overflow_sq
00:07:29.579  Waiting for AER completion...
00:07:29.579  Failure: test_invalid_db_write_overflow_sq
00:07:29.579  
00:07:29.579  Executing: test_invalid_db_write_overflow_cq
00:07:29.579  Waiting for AER completion...
00:07:29.579  Failure: test_invalid_db_write_overflow_cq
00:07:29.579  
00:07:29.579   16:56:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:29.579   16:56:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:29.579  [2024-12-09 16:56:52.532054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:39.554  Executing: test_write_invalid_db
00:07:39.554  Waiting for AER completion...
00:07:39.554  Failure: test_write_invalid_db
00:07:39.554  
00:07:39.554  Executing: test_invalid_db_write_overflow_sq
00:07:39.554  Waiting for AER completion...
00:07:39.554  Failure: test_invalid_db_write_overflow_sq
00:07:39.554  
00:07:39.554  Executing: test_invalid_db_write_overflow_cq
00:07:39.554  Waiting for AER completion...
00:07:39.554  Failure: test_invalid_db_write_overflow_cq
00:07:39.554  
00:07:39.554   16:57:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:39.554   16:57:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:07:39.554  [2024-12-09 16:57:02.573604] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:49.519  Executing: test_write_invalid_db
00:07:49.519  Waiting for AER completion...
00:07:49.519  Failure: test_write_invalid_db
00:07:49.519  
00:07:49.519  Executing: test_invalid_db_write_overflow_sq
00:07:49.519  Waiting for AER completion...
00:07:49.519  Failure: test_invalid_db_write_overflow_sq
00:07:49.519  
00:07:49.519  Executing: test_invalid_db_write_overflow_cq
00:07:49.519  Waiting for AER completion...
00:07:49.519  Failure: test_invalid_db_write_overflow_cq
00:07:49.519  
00:07:49.519   16:57:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:49.519   16:57:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:07:49.777  [2024-12-09 16:57:12.594524] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.739  Executing: test_write_invalid_db
00:07:59.739  Waiting for AER completion...
00:07:59.739  Failure: test_write_invalid_db
00:07:59.739  
00:07:59.739  Executing: test_invalid_db_write_overflow_sq
00:07:59.739  Waiting for AER completion...
00:07:59.739  Failure: test_invalid_db_write_overflow_sq
00:07:59.739  
00:07:59.739  Executing: test_invalid_db_write_overflow_cq
00:07:59.739  Waiting for AER completion...
00:07:59.739  Failure: test_invalid_db_write_overflow_cq
00:07:59.739  
00:07:59.739  
00:07:59.739  real	0m40.194s
00:07:59.739  user	0m34.265s
00:07:59.739  sys	0m5.528s
00:07:59.739   16:57:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:59.739   16:57:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:07:59.739  ************************************
00:07:59.739  END TEST nvme_doorbell_aers
00:07:59.739  ************************************
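The four near-identical blocks in nvme_doorbell_aers come from a single loop in nvme/nvme.sh: enumerate the NVMe bus/device/function addresses via gen_nvme.sh, then run doorbell_aers against each device with a 10-second cap. Reconstructed from the xtrace lines above (the function body itself is never printed verbatim, so treat this as a sketch; $rootdir is the harness's SPDK checkout variable):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

Each run above is preceded by an "owning process (pid 64476) is not found" error: per the message itself, requests left over from the earlier stub process are dropped as the new process takes the controller.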
00:07:59.740    16:57:22 nvme -- nvme/nvme.sh@97 -- # uname
00:07:59.740   16:57:22 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:07:59.740   16:57:22 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:07:59.740   16:57:22 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:59.740   16:57:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:59.740   16:57:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.740  ************************************
00:07:59.740  START TEST nvme_multi_aen
00:07:59.740  ************************************
00:07:59.740   16:57:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:07:59.740  [2024-12-09 16:57:22.633484] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.633547] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.633557] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.634792] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.634821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.634829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.635934] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.636041] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.636097] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.637123] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.637217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  [2024-12-09 16:57:22.637272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64476) is not found. Dropping the request.
00:07:59.740  Child process pid: 65002
00:07:59.997  [Child] Asynchronous Event Request test
00:07:59.997  [Child] Attached to 0000:00:10.0
00:07:59.997  [Child] Attached to 0000:00:11.0
00:07:59.997  [Child] Attached to 0000:00:13.0
00:07:59.997  [Child] Attached to 0000:00:12.0
00:07:59.997  [Child] Registering asynchronous event callbacks...
00:07:59.997  [Child] Getting orig temperature thresholds of all controllers
00:07:59.997  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  [Child] Waiting for all controllers to trigger AER and reset threshold
00:07:59.997  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  [Child] 0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  [Child] 0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  [Child] 0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  [Child] Cleaning up...
00:07:59.997  Asynchronous Event Request test
00:07:59.997  Attached to 0000:00:10.0
00:07:59.997  Attached to 0000:00:11.0
00:07:59.997  Attached to 0000:00:13.0
00:07:59.997  Attached to 0000:00:12.0
00:07:59.997  Reset controller to setup AER completions for this process
00:07:59.997  Registering asynchronous event callbacks...
00:07:59.997  Getting orig temperature thresholds of all controllers
00:07:59.997  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:59.997  Setting all controllers temperature threshold low to trigger AER
00:07:59.997  Waiting for all controllers temperature threshold to be set lower
00:07:59.997  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:59.997  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:59.997  0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:59.997  0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:59.997  aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:59.997  Waiting for all controllers to trigger AER and reset threshold
00:07:59.997  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:59.997  Cleaning up...
00:07:59.997  ************************************
00:07:59.997  END TEST nvme_multi_aen
00:07:59.997  ************************************
00:07:59.997  
00:07:59.997  real	0m0.398s
00:07:59.997  user	0m0.127s
00:07:59.997  sys	0m0.175s
00:07:59.997   16:57:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:59.997   16:57:22 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
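nvme_multi_aen repeats the same cycle twice because of aer's multi-process mode: judging from the output, -m forks a child (pid 65002 above) that attaches to the same four controllers as a second SPDK process and runs its own AER pass (the [Child]-prefixed lines) before the parent runs the full threshold test. The invocation as traced, with flag meanings inferred from the output rather than confirmed:

    # -m: run the additional child-process pass, -T: temperature-threshold test,
    # -i 0: shared-memory instance id, matching the other tools in this job.
    "$SPDK_ROOT/test/nvme/aer/aer" -m -T -i 0    # SPDK_ROOT: assumed checkout path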
00:07:59.997   16:57:22 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:07:59.997   16:57:22 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:59.997   16:57:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:59.997   16:57:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.997  ************************************
00:07:59.997  START TEST nvme_startup
00:07:59.997  ************************************
00:07:59.997   16:57:22 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:00.255  Initializing NVMe Controllers
00:08:00.255  Attached to 0000:00:10.0
00:08:00.255  Attached to 0000:00:11.0
00:08:00.255  Attached to 0000:00:13.0
00:08:00.255  Attached to 0000:00:12.0
00:08:00.255  Initialization complete.
00:08:00.255  Time used: 149723.016 (us)
00:08:00.255  
00:08:00.255  real	0m0.212s
00:08:00.255  user	0m0.069s
00:08:00.255  sys	0m0.093s
00:08:00.255   16:57:23 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.255  ************************************
00:08:00.255  END TEST nvme_startup
00:08:00.255  ************************************
00:08:00.255   16:57:23 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:08:00.255   16:57:23 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:08:00.255   16:57:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:00.255   16:57:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.255   16:57:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:00.255  ************************************
00:08:00.255  START TEST nvme_multi_secondary
00:08:00.255  ************************************
00:08:00.255   16:57:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:08:00.255   16:57:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65053
00:08:00.255   16:57:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:08:00.255   16:57:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65054
00:08:00.255   16:57:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:08:00.255   16:57:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:03.552  Initializing NVMe Controllers
00:08:03.552  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:03.552  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:03.552  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:03.552  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:03.552  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:03.552  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:03.552  Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:03.552  Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:03.552  Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:03.552  Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:03.552  Initialization complete. Launching workers.
00:08:03.552  ========================================================
00:08:03.552                                                                             Latency(us)
00:08:03.552  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:03.552  PCIE (0000:00:10.0) NSID 1 from core  1:    7795.01      30.45    2051.08     709.06    6165.43
00:08:03.552  PCIE (0000:00:11.0) NSID 1 from core  1:    7795.01      30.45    2052.15     730.32    5902.67
00:08:03.552  PCIE (0000:00:13.0) NSID 1 from core  1:    7795.01      30.45    2052.10     729.25    5600.96
00:08:03.552  PCIE (0000:00:12.0) NSID 1 from core  1:    7795.01      30.45    2052.05     724.03    5710.25
00:08:03.552  PCIE (0000:00:12.0) NSID 2 from core  1:    7794.68      30.45    2052.09     723.89    5649.96
00:08:03.552  PCIE (0000:00:12.0) NSID 3 from core  1:    7794.35      30.45    2052.17     717.06    5954.11
00:08:03.552  ========================================================
00:08:03.552  Total                                  :   46769.08     182.69    2051.94     709.06    6165.43
00:08:03.552  
00:08:03.552  Initializing NVMe Controllers
00:08:03.552  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:03.552  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:03.552  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:03.552  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:03.552  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:03.552  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:03.552  Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:03.552  Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:03.552  Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:03.552  Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:03.552  Initialization complete. Launching workers.
00:08:03.552  ========================================================
00:08:03.552                                                                             Latency(us)
00:08:03.552  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:03.552  PCIE (0000:00:10.0) NSID 1 from core  2:    3174.33      12.40    5039.05    1122.65   14774.08
00:08:03.552  PCIE (0000:00:11.0) NSID 1 from core  2:    3174.33      12.40    5040.07    1088.33   14869.29
00:08:03.552  PCIE (0000:00:13.0) NSID 1 from core  2:    3174.33      12.40    5039.99    1084.77   17255.05
00:08:03.552  PCIE (0000:00:12.0) NSID 1 from core  2:    3174.33      12.40    5037.40    1061.17   18752.97
00:08:03.552  PCIE (0000:00:12.0) NSID 2 from core  2:    3174.33      12.40    5033.24    1067.45   15920.04
00:08:03.552  PCIE (0000:00:12.0) NSID 3 from core  2:    3174.33      12.40    5033.42    1184.65   15569.27
00:08:03.552  ========================================================
00:08:03.552  Total                                  :   19045.99      74.40    5037.20    1061.17   18752.97
00:08:03.552  
00:08:03.552   16:57:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65053
00:08:06.073  Initializing NVMe Controllers
00:08:06.073  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:06.073  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:06.073  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:06.073  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:06.073  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:06.073  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:06.073  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:06.073  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:06.073  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:06.073  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:06.073  Initialization complete. Launching workers.
00:08:06.073  ========================================================
00:08:06.073                                                                             Latency(us)
00:08:06.073  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:06.073  PCIE (0000:00:10.0) NSID 1 from core  0:   11232.55      43.88    1423.14     675.96    6613.34
00:08:06.073  PCIE (0000:00:11.0) NSID 1 from core  0:   11236.15      43.89    1423.56     669.79    7274.86
00:08:06.073  PCIE (0000:00:13.0) NSID 1 from core  0:   11206.15      43.77    1427.33     630.94    7052.14
00:08:06.073  PCIE (0000:00:12.0) NSID 1 from core  0:   11242.95      43.92    1422.63     627.60    6106.83
00:08:06.073  PCIE (0000:00:12.0) NSID 2 from core  0:   11239.55      43.90    1423.04     612.80    5639.53
00:08:06.073  PCIE (0000:00:12.0) NSID 3 from core  0:   11245.15      43.93    1422.31     603.70    6315.16
00:08:06.073  ========================================================
00:08:06.073  Total                                  :   67402.49     263.29    1423.67     603.70    7274.86
00:08:06.073  
00:08:06.073   16:57:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65054
00:08:06.073   16:57:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65123
00:08:06.073   16:57:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:06.073   16:57:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65124
00:08:06.073   16:57:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:08:06.073   16:57:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:09.345  Initializing NVMe Controllers
00:08:09.345  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:09.345  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:09.345  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:09.345  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:09.345  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:09.345  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:09.345  Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:09.345  Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:09.345  Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:09.345  Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:09.345  Initialization complete. Launching workers.
00:08:09.345  ========================================================
00:08:09.345                                                                             Latency(us)
00:08:09.345  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:09.345  PCIE (0000:00:10.0) NSID 1 from core  1:    8211.93      32.08    1947.00     666.71    6945.04
00:08:09.345  PCIE (0000:00:11.0) NSID 1 from core  1:    8211.93      32.08    1947.98     688.06    6138.40
00:08:09.345  PCIE (0000:00:13.0) NSID 1 from core  1:    8211.93      32.08    1947.93     697.91    5935.90
00:08:09.345  PCIE (0000:00:12.0) NSID 1 from core  1:    8211.93      32.08    1947.90     682.13    5821.95
00:08:09.345  PCIE (0000:00:12.0) NSID 2 from core  1:    8211.93      32.08    1947.89     688.78    5999.88
00:08:09.345  PCIE (0000:00:12.0) NSID 3 from core  1:    8211.93      32.08    1947.89     674.56    6416.46
00:08:09.345  ========================================================
00:08:09.345  Total                                  :   49271.61     192.47    1947.76     666.71    6945.04
00:08:09.345  
00:08:09.345  Initializing NVMe Controllers
00:08:09.345  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:09.345  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:09.345  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:09.345  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:09.345  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:09.345  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:09.345  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:09.345  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:09.345  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:09.345  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:09.345  Initialization complete. Launching workers.
00:08:09.345  ========================================================
00:08:09.345                                                                             Latency(us)
00:08:09.345  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:09.345  PCIE (0000:00:10.0) NSID 1 from core  0:    8001.45      31.26    1998.18     715.86    5236.95
00:08:09.345  PCIE (0000:00:11.0) NSID 1 from core  0:    8001.45      31.26    1999.24     759.08    5614.92
00:08:09.345  PCIE (0000:00:13.0) NSID 1 from core  0:    8001.45      31.26    1999.21     756.87    5399.65
00:08:09.345  PCIE (0000:00:12.0) NSID 1 from core  0:    8001.45      31.26    1999.16     756.43    5440.04
00:08:09.345  PCIE (0000:00:12.0) NSID 2 from core  0:    8001.45      31.26    1999.14     759.82    5242.41
00:08:09.345  PCIE (0000:00:12.0) NSID 3 from core  0:    8001.45      31.26    1999.10     741.30    5306.79
00:08:09.345  ========================================================
00:08:09.345  Total                                  :   48008.68     187.53    1999.01     715.86    5614.92
00:08:09.345  
00:08:11.242  Initializing NVMe Controllers
00:08:11.243  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:11.243  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:11.243  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:11.243  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:11.243  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:11.243  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:11.243  Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:11.243  Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:11.243  Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:11.243  Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:11.243  Initialization complete. Launching workers.
00:08:11.243  ========================================================
00:08:11.243                                                                             Latency(us)
00:08:11.243  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:11.243  PCIE (0000:00:10.0) NSID 1 from core  2:    4634.11      18.10    3450.23     739.38   20958.29
00:08:11.243  PCIE (0000:00:11.0) NSID 1 from core  2:    4634.11      18.10    3451.96     716.03   20999.98
00:08:11.243  PCIE (0000:00:13.0) NSID 1 from core  2:    4634.11      18.10    3451.72     767.25   19704.19
00:08:11.243  PCIE (0000:00:12.0) NSID 1 from core  2:    4634.11      18.10    3452.19     761.12   20019.57
00:08:11.243  PCIE (0000:00:12.0) NSID 2 from core  2:    4634.11      18.10    3452.12     768.29   20293.81
00:08:11.243  PCIE (0000:00:12.0) NSID 3 from core  2:    4634.11      18.10    3451.72     760.88   22631.25
00:08:11.243  ========================================================
00:08:11.243  Total                                  :   27804.68     108.61    3451.66     716.03   22631.25
00:08:11.243  
00:08:11.243  ************************************
00:08:11.243  END TEST nvme_multi_secondary
00:08:11.243  ************************************
00:08:11.243   16:57:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65123
00:08:11.243   16:57:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65124
00:08:11.243  
00:08:11.243  real	0m10.878s
00:08:11.243  user	0m18.347s
00:08:11.243  sys	0m0.608s
00:08:11.243   16:57:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:11.243   16:57:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
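nvme_multi_secondary is a multi-process test rather than a performance test: three spdk_nvme_perf instances join shared-memory instance 0 (-i 0) on disjoint core masks and drive the same controllers concurrently, which is why three separate result tables appear per round. In the first round the core-0 instance runs longest (-t 5); the second round (pids 65123/65124) swaps this, so a secondary outlives the primary. A condensed sketch of the first round, process layout taken from the nvme.sh trace (the first instance to start becomes the DPDK primary; pid bookkeeping is simplified here):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!    # core 0, runs longest
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!    # core 1 secondary
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4              # core 2 secondary, foreground
    wait "$pid0" "$pid1"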
00:08:11.243   16:57:34 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:11.243   16:57:34 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64091 ]]
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1094 -- # kill 64091
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1095 -- # wait 64091
00:08:11.243  [2024-12-09 16:57:34.072235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.072298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.072325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.072345] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.075006] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.075231] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.075375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.075447] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.078104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.078161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.078184] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.078207] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.080484] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.080636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.080655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243  [2024-12-09 16:57:34.080672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65000) is not found. Dropping the request.
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:08:11.243   16:57:34 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:11.243   16:57:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:11.243  ************************************
00:08:11.243  START TEST bdev_nvme_reset_stuck_adm_cmd
00:08:11.243  ************************************
00:08:11.243   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:11.503  * Looking for test storage...
00:08:11.503  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:11.503     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:11.503    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:11.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.503  		--rc genhtml_branch_coverage=1
00:08:11.503  		--rc genhtml_function_coverage=1
00:08:11.503  		--rc genhtml_legend=1
00:08:11.503  		--rc geninfo_all_blocks=1
00:08:11.503  		--rc geninfo_unexecuted_blocks=1
00:08:11.503  		
00:08:11.503  		'
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:11.504  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.504  		--rc genhtml_branch_coverage=1
00:08:11.504  		--rc genhtml_function_coverage=1
00:08:11.504  		--rc genhtml_legend=1
00:08:11.504  		--rc geninfo_all_blocks=1
00:08:11.504  		--rc geninfo_unexecuted_blocks=1
00:08:11.504  		
00:08:11.504  		'
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:11.504  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.504  		--rc genhtml_branch_coverage=1
00:08:11.504  		--rc genhtml_function_coverage=1
00:08:11.504  		--rc genhtml_legend=1
00:08:11.504  		--rc geninfo_all_blocks=1
00:08:11.504  		--rc geninfo_unexecuted_blocks=1
00:08:11.504  		
00:08:11.504  		'
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:11.504  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.504  		--rc genhtml_branch_coverage=1
00:08:11.504  		--rc genhtml_function_coverage=1
00:08:11.504  		--rc genhtml_legend=1
00:08:11.504  		--rc geninfo_all_blocks=1
00:08:11.504  		--rc geninfo_unexecuted_blocks=1
00:08:11.504  		
00:08:11.504  		'
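The xtrace block above is the harness's lcov version guard from scripts/common.sh: cmp_versions splits the two version strings on ".", "-" and ":" into arrays, then walks the fields, padding the shorter array with zeros and comparing numerically, so "lt 1.15 2" asks whether the installed lcov predates 2.x. A compact reconstruction of the traced logic (a sketch, not the verbatim source; it skips the digit-validation step the real helper performs):

    lt() {  # lt A B -> exit status 0 when version A < version B
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }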
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:08:11.504     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:08:11.504     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:11.504     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:08:11.504     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:11.504      16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:11.504      16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:11.504     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:11.504     16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:11.504    16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65291
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65291
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65291 ']'
00:08:11.504  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:11.504   16:57:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:11.762  [2024-12-09 16:57:34.568943] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:08:11.762  [2024-12-09 16:57:34.569103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65291 ]
00:08:11.762  [2024-12-09 16:57:34.737278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:12.020  [2024-12-09 16:57:34.823602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:12.020  [2024-12-09 16:57:34.823806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:12.020  [2024-12-09 16:57:34.824037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.020  [2024-12-09 16:57:34.824062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
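The waitforlisten helper traced here blocks until the freshly started spdk_tgt answers on its RPC socket. A simplified sketch, assuming the socket is probed via rpc.py (the countdown and final checks match the (( i == 0 )) / return 0 lines above, but the probe itself is an assumption):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1  # target died before listening
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            sleep 0.5
        done
        ((i == 0)) && return 1  # retries exhausted
        return 0
    }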
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:12.588  nvme0n1
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.588    16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fa1PR.txt
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:12.588  true
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.588    16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733763455
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65314
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:08:12.588   16:57:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
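Pulling the traced steps together, the stuck-admin-command scenario runs as follows (paraphrased from the nvme_reset_stuck_adm_cmd.sh trace; $cmd_base64 is a stand-in for the literal payload above, and rpc_cmd/tmp_file come from the surrounding script):

    # 1) Arm a one-shot error injection on admin opcode 0x0a (Get Features):
    #    hold the command for up to 15 s instead of submitting it.
    rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin \
        --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    start_time=$(date +%s)
    # 2) Fire a Get Features (Number of Queues, cdw10=7) in the background;
    #    it parks on the injected rule.
    "$rootdir/scripts/rpc.py" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c "$cmd_base64" > "$tmp_file" &
    get_feat_pid=$!
    sleep 2
    # 3) Reset the controller while the admin command is pending; the reset
    #    should complete the command manually instead of waiting out 15 s.
    rpc_cmd bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"
    diff_time=$(($(date +%s) - start_time))  # later compared against test_timeout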
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:14.548  [2024-12-09 16:57:37.506378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:08:14.548  [2024-12-09 16:57:37.506623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:08:14.548  [2024-12-09 16:57:37.506644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:08:14.548  [2024-12-09 16:57:37.506655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:14.548  [2024-12-09 16:57:37.508611] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:08:14.548  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65314
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65314
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65314
00:08:14.548    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.548   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:08:14.548    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fa1PR.txt
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:14.808     16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:14.808      16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:14.808     16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:14.808     16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:14.808      16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:14.808     16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
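The two base64_decode_bits calls above pull the Status Code (shift 1, mask 255) and Status Code Type (shift 9, mask 3) out of the completion entry saved by bdev_nvme_send_cmd. A sketch of the helper as reconstructed from the trace; assembling the status word from the last two decoded bytes is an inference from the status=2 line, not shown verbatim:

    base64_decode_bits() {
        local bin_array status
        # Decode the base64 CQE blob into one hex byte per array element.
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # The 16-bit status word sits at the tail of the completion entry:
        # bit 0 is the phase tag, bits 1-8 the SC, bits 9-11 the SCT.
        status=$((0x${bin_array[-1]} << 8 | 0x${bin_array[-2]}))
        printf '0x%x' $(((status >> $2) & $3))
    }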
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fa1PR.txt
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65291
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65291 ']'
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65291
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:14.808    16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65291
00:08:14.808  killing process with pid 65291
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65291'
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65291
00:08:14.808   16:57:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65291
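killprocess, per the trace, verifies the pid and its process name before signalling; roughly as follows (the sudo special case at @964 is only hinted at, since this run takes the reactor_0 path):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0  # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 here
        fi
        # the real helper handles process_name == sudo differently
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }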
00:08:16.182   16:57:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:08:16.182   16:57:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:08:16.182  ************************************
00:08:16.182  END TEST bdev_nvme_reset_stuck_adm_cmd
00:08:16.182  ************************************
00:08:16.182  
00:08:16.182  real	0m4.576s
00:08:16.182  user	0m16.202s
00:08:16.182  sys	0m0.508s
00:08:16.182   16:57:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:16.182   16:57:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:16.182   16:57:38 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:08:16.182   16:57:38 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:08:16.182   16:57:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:16.182   16:57:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:16.182   16:57:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:16.182  ************************************
00:08:16.182  START TEST nvme_fio
00:08:16.182  ************************************
00:08:16.182   16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:08:16.182    16:57:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:08:16.182    16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:16.182    16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:08:16.182    16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:16.182     16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:16.182     16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:16.182    16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:16.182    16:57:38 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:16.182   16:57:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:16.182   16:57:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:16.182   16:57:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:16.441   16:57:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:16.441   16:57:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:16.441    16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:16.441    16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:16.441    16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:16.441   16:57:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
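The sanitizer probing above exists because an ASan-instrumented fio plugin needs the ASan runtime preloaded ahead of the plugin itself. A condensed sketch of the wrapper, simplified from the trace:

    # Run fio with the SPDK ioengine, preloading the sanitizer runtime
    # first if ldd shows the plugin links against one.
    fio_plugin() {
        local plugin=$1 fio_dir=/usr/src/fio
        local sanitizers=('libasan' 'libclang_rt.asan') sanitizer asan_lib=
        shift
        for sanitizer in "${sanitizers[@]}"; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }

Note the --filename values spell the transport address with dots (traddr=0000.00.10.0) rather than colons, because fio reserves ':' as a filename separator.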
00:08:16.699  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:16.699  fio-3.35
00:08:16.699  Starting 1 thread
00:08:21.964  
00:08:21.964  test: (groupid=0, jobs=1): err= 0: pid=65448: Mon Dec  9 16:57:44 2024
00:08:21.964    read: IOPS=21.5k, BW=84.0MiB/s (88.1MB/s)(168MiB/2001msec)
00:08:21.964      slat (nsec): min=3854, max=84122, avg=5902.71, stdev=2603.41
00:08:21.964      clat (usec): min=251, max=8736, avg=2972.65, stdev=919.72
00:08:21.964       lat (usec): min=256, max=8741, avg=2978.55, stdev=921.29
00:08:21.964      clat percentiles (usec):
00:08:21.964       |  1.00th=[ 2409],  5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540],
00:08:21.964       | 30.00th=[ 2573], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638],
00:08:21.964       | 70.00th=[ 2737], 80.00th=[ 3130], 90.00th=[ 3785], 95.00th=[ 5407],
00:08:21.964       | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7701], 99.95th=[ 8225],
00:08:21.964       | 99.99th=[ 8455]
00:08:21.964     bw (  KiB/s): min=83168, max=89072, per=99.47%, avg=85605.33, stdev=3083.66, samples=3
00:08:21.964     iops        : min=20792, max=22268, avg=21401.33, stdev=770.91, samples=3
00:08:21.964    write: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets
00:08:21.964      slat (nsec): min=4155, max=58434, avg=6377.80, stdev=2533.27
00:08:21.964      clat (usec): min=200, max=8690, avg=2974.82, stdev=913.64
00:08:21.964       lat (usec): min=206, max=8696, avg=2981.20, stdev=915.14
00:08:21.964      clat percentiles (usec):
00:08:21.964       |  1.00th=[ 2409],  5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540],
00:08:21.964       | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2606], 60.00th=[ 2638],
00:08:21.964       | 70.00th=[ 2737], 80.00th=[ 3163], 90.00th=[ 3785], 95.00th=[ 5407],
00:08:21.964       | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 8094],
00:08:21.964       | 99.99th=[ 8455]
00:08:21.964     bw (  KiB/s): min=83200, max=89496, per=100.00%, avg=85744.00, stdev=3317.28, samples=3
00:08:21.964     iops        : min=20800, max=22374, avg=21436.00, stdev=829.32, samples=3
00:08:21.964    lat (usec)   : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02%
00:08:21.964    lat (msec)   : 2=0.14%, 4=90.73%, 10=9.07%
00:08:21.964    cpu          : usr=99.10%, sys=0.05%, ctx=4, majf=0, minf=608
00:08:21.964    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:21.964       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:21.964       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:21.964       issued rwts: total=43050,42731,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:21.964       latency   : target=0, window=0, percentile=100.00%, depth=128
00:08:21.964  
00:08:21.964  Run status group 0 (all jobs):
00:08:21.964     READ: bw=84.0MiB/s (88.1MB/s), 84.0MiB/s-84.0MiB/s (88.1MB/s-88.1MB/s), io=168MiB (176MB), run=2001-2001msec
00:08:21.964    WRITE: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec
00:08:22.228  -----------------------------------------------------
00:08:22.229  Suppressions used:
00:08:22.229    count      bytes template
00:08:22.229        1         32 /usr/src/fio/parse.c
00:08:22.229        1          8 libtcmalloc_minimal.so
00:08:22.229  -----------------------------------------------------
00:08:22.229  
00:08:22.229   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:22.229   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:22.229   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:22.229   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:22.489   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:22.489   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:22.748   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:22.748   16:57:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:22.748    16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:22.748    16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:22.748    16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:22.748   16:57:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:08:22.748  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:22.748  fio-3.35
00:08:22.748  Starting 1 thread
00:08:29.302  
00:08:29.302  test: (groupid=0, jobs=1): err= 0: pid=65508: Mon Dec  9 16:57:51 2024
00:08:29.302    read: IOPS=22.3k, BW=87.2MiB/s (91.4MB/s)(175MiB/2001msec)
00:08:29.302      slat (usec): min=3, max=119, avg= 5.41, stdev= 2.27
00:08:29.302      clat (usec): min=223, max=10276, avg=2863.34, stdev=767.81
00:08:29.302       lat (usec): min=227, max=10395, avg=2868.75, stdev=769.12
00:08:29.302      clat percentiles (usec):
00:08:29.302       |  1.00th=[ 1893],  5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2507],
00:08:29.302       | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671],
00:08:29.302       | 70.00th=[ 2769], 80.00th=[ 2966], 90.00th=[ 3458], 95.00th=[ 4621],
00:08:29.302       | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 7767],
00:08:29.302       | 99.99th=[10028]
00:08:29.302     bw (  KiB/s): min=84000, max=91496, per=98.89%, avg=88309.33, stdev=3872.05, samples=3
00:08:29.302     iops        : min=21000, max=22874, avg=22077.33, stdev=968.01, samples=3
00:08:29.302    write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec); 0 zone resets
00:08:29.302      slat (usec): min=3, max=617, avg= 5.81, stdev= 3.72
00:08:29.302      clat (usec): min=231, max=10130, avg=2863.92, stdev=764.10
00:08:29.302       lat (usec): min=236, max=10148, avg=2869.73, stdev=765.46
00:08:29.302      clat percentiles (usec):
00:08:29.302       |  1.00th=[ 1893],  5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2507],
00:08:29.302       | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671],
00:08:29.302       | 70.00th=[ 2769], 80.00th=[ 2966], 90.00th=[ 3458], 95.00th=[ 4621],
00:08:29.302       | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6849], 99.95th=[ 8094],
00:08:29.302       | 99.99th=[ 9765]
00:08:29.302     bw (  KiB/s): min=83960, max=92392, per=99.72%, avg=88464.00, stdev=4245.41, samples=3
00:08:29.302     iops        : min=20990, max=23098, avg=22116.00, stdev=1061.35, samples=3
00:08:29.302    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04%
00:08:29.302    lat (msec)   : 2=1.32%, 4=91.57%, 10=7.04%, 20=0.01%
00:08:29.302    cpu          : usr=99.10%, sys=0.10%, ctx=5, majf=0, minf=609
00:08:29.302    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:29.302       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:29.302       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:29.302       issued rwts: total=44673,44379,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:29.302       latency   : target=0, window=0, percentile=100.00%, depth=128
00:08:29.302  
00:08:29.302  Run status group 0 (all jobs):
00:08:29.302     READ: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=175MiB (183MB), run=2001-2001msec
00:08:29.302    WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec
00:08:29.302  -----------------------------------------------------
00:08:29.302  Suppressions used:
00:08:29.302    count      bytes template
00:08:29.302        1         32 /usr/src/fio/parse.c
00:08:29.302        1          8 libtcmalloc_minimal.so
00:08:29.302  -----------------------------------------------------
00:08:29.302  
00:08:29.302   16:57:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:29.302   16:57:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:29.302   16:57:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:29.302   16:57:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:29.302   16:57:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:29.302   16:57:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:29.302   16:57:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:29.302   16:57:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:29.302    16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:29.302    16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:29.302    16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:29.302   16:57:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:08:29.561  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:29.561  fio-3.35
00:08:29.561  Starting 1 thread
00:08:37.675  
00:08:37.675  test: (groupid=0, jobs=1): err= 0: pid=65564: Mon Dec  9 16:57:59 2024
00:08:37.675    read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec)
00:08:37.675      slat (nsec): min=3381, max=58206, avg=4963.72, stdev=2073.19
00:08:37.675      clat (usec): min=691, max=11730, avg=2686.34, stdev=753.35
00:08:37.675       lat (usec): min=701, max=11788, avg=2691.30, stdev=754.65
00:08:37.675      clat percentiles (usec):
00:08:37.675       |  1.00th=[ 1647],  5.00th=[ 2089], 10.00th=[ 2311], 20.00th=[ 2409],
00:08:37.675       | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573],
00:08:37.675       | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2933], 95.00th=[ 4293],
00:08:37.675       | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7635], 99.95th=[ 7898],
00:08:37.675       | 99.99th=[11338]
00:08:37.675     bw (  KiB/s): min=91880, max=100096, per=100.00%, avg=95474.67, stdev=4203.12, samples=3
00:08:37.675     iops        : min=22970, max=25024, avg=23868.67, stdev=1050.78, samples=3
00:08:37.675    write: IOPS=23.6k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec); 0 zone resets
00:08:37.675      slat (nsec): min=3540, max=77034, avg=5241.40, stdev=2140.93
00:08:37.675      clat (usec): min=616, max=11520, avg=2687.79, stdev=754.50
00:08:37.675       lat (usec): min=626, max=11537, avg=2693.03, stdev=755.80
00:08:37.675      clat percentiles (usec):
00:08:37.675       |  1.00th=[ 1631],  5.00th=[ 2073], 10.00th=[ 2311], 20.00th=[ 2409],
00:08:37.675       | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573],
00:08:37.675       | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2933], 95.00th=[ 4293],
00:08:37.675       | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 7504], 99.95th=[ 8160],
00:08:37.675       | 99.99th=[11076]
00:08:37.675     bw (  KiB/s): min=91592, max=101456, per=100.00%, avg=95485.33, stdev=5249.87, samples=3
00:08:37.675     iops        : min=22898, max=25364, avg=23871.33, stdev=1312.47, samples=3
00:08:37.675    lat (usec)   : 750=0.01%, 1000=0.04%
00:08:37.675    lat (msec)   : 2=3.73%, 4=90.56%, 10=5.64%, 20=0.02%
00:08:37.675    cpu          : usr=99.20%, sys=0.10%, ctx=4, majf=0, minf=608
00:08:37.675    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:37.675       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:37.675       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:37.675       issued rwts: total=47625,47318,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:37.675       latency   : target=0, window=0, percentile=100.00%, depth=128
00:08:37.675  
00:08:37.675  Run status group 0 (all jobs):
00:08:37.675     READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec
00:08:37.675    WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec
00:08:37.675  -----------------------------------------------------
00:08:37.675  Suppressions used:
00:08:37.675    count      bytes template
00:08:37.675        1         32 /usr/src/fio/parse.c
00:08:37.675        1          8 libtcmalloc_minimal.so
00:08:37.675  -----------------------------------------------------
00:08:37.675  
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:08:37.675   16:57:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:08:37.675    16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:08:37.675    16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:08:37.675    16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:08:37.675   16:57:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:08:37.675  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:08:37.675  fio-3.35
00:08:37.675  Starting 1 thread
00:08:47.693  
00:08:47.693  test: (groupid=0, jobs=1): err= 0: pid=65625: Mon Dec  9 16:58:09 2024
00:08:47.693    read: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec)
00:08:47.693      slat (usec): min=4, max=142, avg= 5.13, stdev= 2.39
00:08:47.693      clat (usec): min=208, max=8824, avg=2808.96, stdev=922.41
00:08:47.693       lat (usec): min=213, max=8829, avg=2814.09, stdev=923.79
00:08:47.693      clat percentiles (usec):
00:08:47.693       |  1.00th=[ 1795],  5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2409],
00:08:47.693       | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540],
00:08:47.693       | 70.00th=[ 2606], 80.00th=[ 2802], 90.00th=[ 3851], 95.00th=[ 5080],
00:08:47.693       | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7570], 99.95th=[ 7898],
00:08:47.693       | 99.99th=[ 8586]
00:08:47.693     bw (  KiB/s): min=90544, max=98384, per=100.00%, avg=94642.67, stdev=3932.20, samples=3
00:08:47.693     iops        : min=22636, max=24596, avg=23660.67, stdev=983.05, samples=3
00:08:47.693    write: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec); 0 zone resets
00:08:47.693      slat (usec): min=4, max=111, avg= 5.42, stdev= 2.39
00:08:47.693      clat (usec): min=244, max=8856, avg=2815.88, stdev=918.61
00:08:47.693       lat (usec): min=249, max=8861, avg=2821.30, stdev=920.04
00:08:47.693      clat percentiles (usec):
00:08:47.693       |  1.00th=[ 1827],  5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409],
00:08:47.693       | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540],
00:08:47.693       | 70.00th=[ 2606], 80.00th=[ 2802], 90.00th=[ 3884], 95.00th=[ 5080],
00:08:47.693       | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7439], 99.95th=[ 7570],
00:08:47.693       | 99.99th=[ 8094]
00:08:47.693     bw (  KiB/s): min=90056, max=97856, per=100.00%, avg=94626.67, stdev=4069.32, samples=3
00:08:47.693     iops        : min=22514, max=24464, avg=23656.67, stdev=1017.33, samples=3
00:08:47.693    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03%
00:08:47.693    lat (msec)   : 2=1.74%, 4=88.79%, 10=9.42%
00:08:47.693    cpu          : usr=98.95%, sys=0.10%, ctx=18, majf=0, minf=607
00:08:47.693    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:08:47.693       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:47.693       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:47.693       issued rwts: total=45496,45228,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:47.693       latency   : target=0, window=0, percentile=100.00%, depth=128
00:08:47.693  
00:08:47.693  Run status group 0 (all jobs):
00:08:47.693     READ: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec
00:08:47.693    WRITE: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec
00:08:47.693  -----------------------------------------------------
00:08:47.693  Suppressions used:
00:08:47.693    count      bytes template
00:08:47.693        1         32 /usr/src/fio/parse.c
00:08:47.693        1          8 libtcmalloc_minimal.so
00:08:47.693  -----------------------------------------------------
00:08:47.693  
00:08:47.693  ************************************
00:08:47.693  END TEST nvme_fio
00:08:47.693  ************************************
00:08:47.693   16:58:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:08:47.693   16:58:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:08:47.693  
00:08:47.693  real	0m30.613s
00:08:47.693  user	0m19.371s
00:08:47.693  sys	0m20.119s
00:08:47.693   16:58:09 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.693   16:58:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:47.693  ************************************
00:08:47.693  END TEST nvme
00:08:47.693  ************************************
00:08:47.693  
00:08:47.693  real	1m39.594s
00:08:47.693  user	3m38.876s
00:08:47.693  sys	0m30.450s
00:08:47.693   16:58:09 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.693   16:58:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:47.693   16:58:09  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:08:47.694   16:58:09  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:08:47.694   16:58:09  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:47.694   16:58:09  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:47.694   16:58:09  -- common/autotest_common.sh@10 -- # set +x
00:08:47.694  ************************************
00:08:47.694  START TEST nvme_scc
00:08:47.694  ************************************
00:08:47.694   16:58:09 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:08:47.694  * Looking for test storage...
00:08:47.694  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:47.694      16:58:09 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:47.694      16:58:09 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@345 -- # : 1
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:47.694     16:58:09 nvme_scc -- scripts/common.sh@368 -- # return 0
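The lt / cmp_versions trace above is a field-wise version compare (here deciding whether lcov 1.x predates 2, to pick coverage flags). A condensed sketch; the real helper validates each field via decimal, which is elided here:

    # "lt 1.15 2" asks whether version 1.15 < 2; missing fields compare as 0.
    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]]; return
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]]; return
            fi
        done
        [[ $op == '==' ]]  # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }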
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:47.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.694  		--rc genhtml_branch_coverage=1
00:08:47.694  		--rc genhtml_function_coverage=1
00:08:47.694  		--rc genhtml_legend=1
00:08:47.694  		--rc geninfo_all_blocks=1
00:08:47.694  		--rc geninfo_unexecuted_blocks=1
00:08:47.694  		
00:08:47.694  		'
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:47.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.694  		--rc genhtml_branch_coverage=1
00:08:47.694  		--rc genhtml_function_coverage=1
00:08:47.694  		--rc genhtml_legend=1
00:08:47.694  		--rc geninfo_all_blocks=1
00:08:47.694  		--rc geninfo_unexecuted_blocks=1
00:08:47.694  		
00:08:47.694  		'
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:47.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.694  		--rc genhtml_branch_coverage=1
00:08:47.694  		--rc genhtml_function_coverage=1
00:08:47.694  		--rc genhtml_legend=1
00:08:47.694  		--rc geninfo_all_blocks=1
00:08:47.694  		--rc geninfo_unexecuted_blocks=1
00:08:47.694  		
00:08:47.694  		'
00:08:47.694     16:58:09 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:47.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.694  		--rc genhtml_branch_coverage=1
00:08:47.694  		--rc genhtml_function_coverage=1
00:08:47.694  		--rc genhtml_legend=1
00:08:47.694  		--rc geninfo_all_blocks=1
00:08:47.694  		--rc geninfo_unexecuted_blocks=1
00:08:47.694  		
00:08:47.694  		'
00:08:47.694    16:58:09 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:47.694       16:58:09 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:47.694      16:58:09 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:47.694      16:58:09 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:47.694       16:58:09 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.694       16:58:09 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.694       16:58:09 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.694       16:58:09 nvme_scc -- paths/export.sh@5 -- # export PATH
00:08:47.694       16:58:09 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:08:47.694     16:58:09 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:08:47.694    16:58:09 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:47.694    16:58:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:08:47.694   16:58:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:08:47.694   16:58:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:08:47.694   16:58:09 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:47.694  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:47.694  Waiting for block devices as requested
00:08:47.694  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:47.694  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:47.694  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:08:47.694  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:08:52.969  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:08:52.969   16:58:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:08:52.969   16:58:15 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:52.969   16:58:15 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:08:52.969   16:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:52.969   16:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0
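pci_can_use gates the controller scan on the PCI_ALLOWED / PCI_BLOCKED environment filters (both empty in this run, hence the bare match at @21); approximately, using glob matching where the traced helper uses =~:

    # Return 0 if the test may claim this BDF ($1).
    pci_can_use() {
        # allow-list: when PCI_ALLOWED is set, the BDF must appear in it
        if [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $1 "* ]]; then
            return 1
        fi
        # block-list: when PCI_BLOCKED is set, the BDF must not appear in it
        [[ -z $PCI_BLOCKED ]] && return 0
        [[ " $PCI_BLOCKED " != *" $1 "* ]]
    }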
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:08:52.969    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.969   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:08:52.970    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.970   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:08:52.971    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.971   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
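The xtrace lines above show nvme/functions.sh populating the nvme0 associative array: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0, and the loop at functions.sh@21-@23 splits each output line on the first ':' (IFS=: read -r reg val), skips lines with no value, and eval-assigns the pair into the array. A minimal sketch of what the traced nvme_get helper appears to do, reconstructed from this trace alone; the exact whitespace trimming and eval quoting are assumptions, not verified against the script source:

    nvme_get() {
        # usage per functions.sh@57 in the trace: nvme_get nvme0 id-ctrl /dev/nvme0
        local ref=$1 reg val                          # functions.sh@17
        shift                                         # functions.sh@18
        local -gA "$ref=()"                           # functions.sh@20: global assoc array named after the device
        while IFS=: read -r reg val; do               # functions.sh@21: split on the first ':'
            [[ -n $val ]] || continue                 # functions.sh@22: skip lines without a value
            reg=${reg//[[:space:]]/}                  # assumed cleanup: 'ps    0 ' -> 'ps0'
            val=${val#"${val%%[![:space:]]*}"}        # assumed cleanup: trim leading blanks
            eval "${ref}[$reg]=\"$val\""              # functions.sh@23: e.g. nvme0[mdts]="7"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

The colon split also explains the odd rwt entry above: the ps0 descriptor in id-ctrl output spans two physical lines, so the continuation line starting with 'rwt:' is parsed as its own key and its value keeps the remaining 'rwl'/'idle_power'/'active_power' fields after the first colon.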
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:08:52.972    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:08:52.972   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
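Between the two id-ns dumps, functions.sh@53-@58 enumerates the controller's namespace nodes: an extglob pattern matches both the character device (ng0n1) and the block device (nvme0n1) under /sys/class/nvme/nvme0, each match is fed back through nvme_get, and the result is registered into nvme0_ns (through the _ctrl_ns nameref) keyed by the trailing namespace number. A hypothetical standalone rendering of that loop, assuming the caller has already declared an associative array named nvme0_ns; the wrapper function and the shopt line are assumptions, only the loop body appears in the trace:

    shopt -s extglob                                   # assumed: required for the @(...) pattern below

    enum_ctrl_ns() {
        local ctrl=$1 ns ns_dev                        # e.g. ctrl=/sys/class/nvme/nvme0
        local -n _ctrl_ns="${ctrl##*/}_ns"             # functions.sh@53: nameref to nvme0_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
            [[ -e $ns ]] || continue                   # functions.sh@55: also guards the unmatched-glob case
            ns_dev=${ns##*/}                           # functions.sh@56: ng0n1, then nvme0n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57: parse id-ns into an array
            _ctrl_ns[${ns##*n}]=$ns_dev                # functions.sh@58: key = namespace id, here 1
        done
    }

Since both ng0n1 and nvme0n1 end in 'n1', both iterations map key 1, so the nvme0n1 pass below parses the same id-ns fields again and presumably overwrites _ctrl_ns[1] with the block-device name.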
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:52.973   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:08:52.973    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:08:52.974    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.974   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:08:52.975   16:58:15 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:52.975   16:58:15 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:08:52.975   16:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:52.975   16:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:08:52.975    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.975   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:08:52.976    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.976   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:08:52.977    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.977   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"'
00:08:52.978    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.978   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
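The bookkeeping step above indexes the namespace by number with a bash parameter expansion: ${ns##*n} strips the longest prefix matching *n, so a sysfs path ending in ng1n1 (consistent with the glob at @54) reduces to 1. A one-line illustration of that expansion:

    ns=/sys/class/nvme/nvme1/ng1n1
    echo "${ns##*n}"   # -> 1; the greedy match removes everything up to the last 'n'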
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:08:52.979    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:08:52.979   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:08:52.980    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.980   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
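At this point nvme1 is fully registered: ctrls, nvmes, and bdfs map the controller to its device name, its namespace table, and its PCI address (0000:00:10.0), while ordered_ctrls keeps controllers sorted by index. Every field assignment in the trace above is one iteration of the same small loop in nvme/functions.sh. A minimal sketch of that helper, reconstructed from the line references visible in the trace (@16-@23); the real SPDK function may trim keys and values slightly differently:

    nvme_get() {                     # e.g. nvme_get nvme1n1 id-ns /dev/nvme1n1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"          # global assoc array named after the device (@20)

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                  # keep only "key : value" lines (@22)
            eval "${ref}[${reg// /}]=\"${val# }\""     # e.g. nvme1n1[nsze]="0x17a17a" (@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")    # binary path as seen at @16
    }

Splitting on IFS=: is what turns each "reg : val" line of id-ctrl/id-ns output into one associative-array entry, which is why the trace repeats the @21/@22/@23 triplet once per field.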
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:52.981   16:58:15 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:52.981   16:58:15 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:08:52.981   16:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:52.981   16:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0
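Before parsing nvme2, the loop gates the device through pci_can_use: in this run the regex test at scripts/common.sh @21 finds no match against an empty block list, the emptiness test at @25 succeeds, and @27 returns 0 (device usable). A rough sketch of that gate inferred from the trace; the PCI_BLOCKED and PCI_ALLOWED names are assumptions, not confirmed by this log:

    pci_can_use() {    # pci_can_use 0000:00:12.0
        local i
        # a device on the block list is never used; the empty
        # "[[    =~  0000:00:12.0  ]]" test above is this match (@21)
        [[ " $PCI_BLOCKED " =~ " $1 " ]] && return 1
        # no explicit allow list: every remaining device is usable (@25, @27)
        [[ -z $PCI_ALLOWED ]] && return 0
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }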
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12342                ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342               "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342               '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl                          "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl                          '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0   "'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0   '
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"'
00:08:52.981    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.981   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:08:52.982    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.982   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:08:52.983    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.983   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12342 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
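(The trace above is the tail of nvme_get populating the nvme2 associative array from nvme id-ctrl output: each IFS=: / read -r reg val pair consumes one output line, and non-empty values are eval'd into nvme2[reg]. Pieced together from the functions.sh@16-@23 lines in the trace, the helper looks roughly like the sketch below; the exact whitespace trimming and quoting in nvme/functions.sh are assumed, not verbatim.

  # Sketch of nvme_get, reconstructed from the @16-@23 trace lines above.
  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                        # e.g. declare -gA nvme2=()
    while IFS=: read -r reg val; do            # "vwc : 0x7" -> reg="vwc ", val=" 0x7"
      [[ -n $val ]] || continue                # skip lines with nothing after ':'
      eval "${ref}[${reg// /}]=\"${val# }\""   # nvme2[vwc]=0x7
    done < <("$@")                             # e.g. nvme id-ctrl /dev/nvme2
  }

Because read splits only on the first colon, multi-line power-state entries come through oddly: ps0 captures "mp:25.00W ... rrl:0", and the continuation line yields an rwt key holding "0 rwl:0 idle_power:- active_power:-", exactly as logged above.)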
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:08:52.984    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.984   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
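(With the controller fields in place, functions.sh@53-@58 walks the controller's sysfs directory and runs nvme_get once per namespace node, registering each parsed array in a per-controller map. Reconstructed from those trace lines as a sketch; here ctrl=/sys/class/nvme/nvme2 is taken from context, and the extglob shell option is assumed to be enabled for the @(...) pattern:

  local -n _ctrl_ns=${ctrl##*/}_ns            # nameref: nvme2_ns
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue                  # e.g. /sys/class/nvme/nvme2/ng2n1
    ns_dev=${ns##*/}                          # ng2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills the ng2n1 array as above
    _ctrl_ns[${ns##*n}]=$ns_dev               # nvme2_ns[1]=ng2n1
  done

The _ctrl_ns[${ns##*n}]=ng2n1 line just logged is the final step of the first iteration; the loop now repeats for ng2n2.)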
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:08:52.985    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.985   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:08:52.986    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.986   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
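(ng2n2 is now registered alongside ng2n1, and each namespace array holds the raw id-ns fields. As a worked example of reading them back: the trace shows flbas=0x4, whose low nibble selects lbaf4, "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A hypothetical helper, not part of functions.sh, could decode that as:

  # Hypothetical: derive the in-use LBA data size from a parsed id-ns array.
  ns_block_size() {
    local -n _ns=$1                            # e.g. ns_block_size ng2n2
    local idx=$(( ${_ns[flbas]} & 0xf ))       # low FLBAS nibble -> format index 4
    local desc="${_ns[lbaf$idx]}"              # 'ms:0   lbads:12 rp:0 (in use)'
    local lbads="${desc#*lbads:}"              # '12 rp:0 (in use)'
    echo $(( 1 << ${lbads%% *} ))              # 4096 for lbads:12
  }

The same scan now repeats for the controller's third namespace, ng2n3.)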
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:08:52.987    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.987   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.988    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
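The block above is the trace of nvme_get populating the ng2n3 associative array: functions.sh@16 runs nvme id-ns against the device, @21 splits each output line on ':' into a register name and value, and @23 evals the pair into a globally scoped array named after the device. A minimal sketch of that pattern, reconstructed from the traced lines only (the real helper lives in SPDK's test/nvme/functions.sh; the exact whitespace trimming and the empty-value guard are assumptions inferred from the trace):

    # nvme_get <array-name> <command...>: run an nvme-cli identify command and
    # load its "name : value" lines into a global associative array.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. nvme2n1=() -- matches @20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # header/blank lines carry no value (@22)
            reg=${reg//[[:space:]]/}           # "nsze " -> "nsze"
            val=${val#"${val%%[![:space:]]*}"} # drop leading padding only
            eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme2n1[nsze]="0x100000" (@23)
        done < <("$@")
    }

    # usage, as in the trace: nvme_get ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3

Note that read keeps everything after the first ':' in val, which is why composite fields like lbaf0 survive intact as 'ms:0   lbads:9  rp:0 '.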
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.988   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:08:52.989    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.989   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
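Between namespaces, the loop at functions.sh@54 walks the controller's sysfs directory with an extglob pattern that matches both the generic character node (ng2nY) and the block node (nvme2nY), and @58 keys _ctrl_ns by the namespace index extracted with ${ns##*n}. Roughly, under the assumption that extglob is enabled (the @(...) pattern requires it):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2", so the pattern covers ng2n* and nvme2n*
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                          # e.g. nvme2n1  (@56)
        echo "_ctrl_ns[${ns##*n}] <- $ns_dev"     # ${ns##*n} -> "1"  (@58)
    done

Because glob expansion is sorted, the ng2n* entries are visited before the nvme2n* ones, so for a given index the nvme2nY name ends up stored last in _ctrl_ns; that matches the visit order seen in this trace (ng2n3 above, then nvme2n1, nvme2n2, nvme2n3).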
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"'
00:08:52.990    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.990   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
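For scale, the identify data captured for each of these namespaces implies identical geometry: flbas=0x4 selects lbaf4 ('ms:0   lbads:12 rp:0 (in use)'), i.e. 2^12 = 4096-byte blocks with no metadata, and nsze=0x100000 blocks therefore works out to 4 GiB per namespace. A quick arithmetic check (hypothetical snippet, not part of functions.sh, using the logged values):

    nsze=0x100000; lbads=12
    echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"   # -> 4 GiB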
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.991    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:08:52.991   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"'
00:08:52.992    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.992   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
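The block above is functions.sh's nvme_get at work: each line of nvme-cli id-ctrl/id-ns output is split on its first colon into a register name and a value, and the pair is eval'd into a per-device associative array (nvme2n3 here), after which the controller is registered in the ctrls/nvmes/bdfs tables. A minimal standalone sketch of the same parsing pattern, with the device path and array name chosen for illustration and the eval avoided:

    # Split each "reg : val" line of nvme-cli output on the first colon and
    # store it in an associative array, as the trace above does.
    declare -A ns
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg// /}                        # register names lose their padding
        val=${val#"${val%%[![:space:]]*}"}    # values keep trailing spaces, as in the trace
        ns[$reg]=$val
    done < <(nvme id-ns /dev/nvme2n3)
    echo "mssrl=${ns[mssrl]:-unset} lbaf4=${ns[lbaf4]:-unset}"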
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:08:52.993   16:58:15 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:52.993   16:58:15 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:08:52.993   16:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:52.993   16:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0
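pci_can_use above consults a block list and an allow list before the controller is taken; both are empty in this run (the regex test against an empty string, then [[ -z '' ]]), so 0000:00:13.0 passes. A rough reconstruction of the traced order of checks; the PCI_BLOCKED/PCI_ALLOWED names are an assumption mirroring SPDK conventions, not a quote of the script:

    pci_can_use() {
        local bdf=$1
        [[ " ${PCI_BLOCKED-} " =~ " $bdf " ]] && return 1   # explicit block wins
        [[ -z ${PCI_ALLOWED-} ]] && return 0                # no allow list: everything usable
        [[ " $PCI_ALLOWED " =~ " $bdf " ]]
    }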
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12343                ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343               "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343               '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl                          "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl                          '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0   "'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0   '
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x2 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:08:52.993    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:08:52.993   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x88010 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"'
00:08:52.994    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.994   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"'
00:08:52.995    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.995   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:fdp-subsys3 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
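One quirk worth noticing in the array just built: nvme3[rwt] is not an NVMe register. nvme-cli prints power state 0 across two lines, roughly

    ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
              rwt:0 rwl:0 idle_power:- active_power:-

and the line-oriented IFS=: split treats the continuation line like any other "reg : val" pair, so its first field (rwt) becomes a key and the remainder becomes the value seen above.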
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:08:52.996   16:58:15 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
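With (( 4 > 0 )) the enumeration loop is done: four controllers were parsed, and the lookup tables built along the way map each controller name to itself (ctrls), to the name of its per-controller namespace array (nvmes), to its PCI address (bdfs), and to a slot in ordered_ctrls. An illustrative read-out of those tables, with keys taken from the trace; the nameref shuffle is needed only because the namespace map is stored by name:

    for ctrl in "${ordered_ctrls[@]}"; do
        unset -n _ns_map
        declare -n _ns_map=${nvmes[$ctrl]}   # e.g. nvme3_ns
        echo "$ctrl @ ${bdfs[$ctrl]}: ${#_ns_map[@]} namespace(s)"
    done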
00:08:52.996    16:58:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:52.996      16:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:52.996     16:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:08:52.996    16:58:15 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:08:52.997    16:58:15 nvme_scc -- nvme/functions.sh@209 -- # return 0
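get_ctrls_with_feature scc reduces to one bit test per controller: ONCS bit 8 advertises the Copy command (the "SCC" this test needs), and every controller in this run reports oncs=0x15d. The arithmetic behind the (( oncs & 1 << 8 )) lines above:

    #   0x15d         = 0b101011101
    #   1 << 8        = 0x100
    #   0x15d & 0x100 = 0x100   (non-zero, so Copy is supported)
    (( 0x15d & 1 << 8 )) && echo "Copy command supported"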
00:08:52.997   16:58:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:08:52.997   16:58:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:08:52.997   16:58:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:53.562  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:53.820  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:53.820  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:08:53.820  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:54.079  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
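setup.sh has now detached the four QEMU NVMe controllers (1b36 0010) from the kernel nvme driver and bound them to uio_pci_generic, the binding SPDK falls back to when vfio/IOMMU is not in use, so the userspace driver in simple_copy can claim them; the virtio disk at 0000:00:03.0 is skipped because it backs mounted filesystems. The active driver for any device can be confirmed through standard sysfs:

    basename "$(readlink /sys/bus/pci/devices/0000:00:10.0/driver)"
    # -> uio_pci_generic here; back to nvme after a later "setup.sh reset"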
00:08:54.079   16:58:16 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:54.079   16:58:16 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:54.079   16:58:16 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.079   16:58:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:54.079  ************************************
00:08:54.079  START TEST nvme_simple_copy
00:08:54.079  ************************************
00:08:54.079   16:58:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:54.337  Initializing NVMe Controllers
00:08:54.337  Attaching to 0000:00:10.0
00:08:54.337  Controller supports SCC. Attached to 0000:00:10.0
00:08:54.337    Namespace ID: 1 size: 6GB
00:08:54.337  Initialization complete.
00:08:54.337  
00:08:54.337  Controller QEMU NVMe Ctrl       (12340               )
00:08:54.337  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:08:54.337  Namespace Block Size:4096
00:08:54.337  Writing LBAs 0 to 63 with Random Data
00:08:54.337  Copied LBAs from 0 - 63 to the Destination LBA 256
00:08:54.337  LBAs matching Written Data: 64
00:08:54.337  ************************************
00:08:54.337  END TEST nvme_simple_copy
00:08:54.337  ************************************
00:08:54.337  
00:08:54.337  real	0m0.254s
00:08:54.337  user	0m0.086s
00:08:54.337  sys	0m0.066s
00:08:54.337   16:58:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.337   16:58:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
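The passing test above is the whole Simple Copy round trip in miniature: write LBAs 0 to 63 with random data, issue one Copy with destination LBA 256, read both ranges back, and count matches; "LBAs matching Written Data: 64" is the success criterion. Roughly the same exercise can be driven by hand with nvme-cli's copy command; the flag spellings below are from memory and should be treated as illustrative, not authoritative:

    # copy one 64-block source range starting at LBA 0 to destination LBA 256
    nvme copy /dev/nvme1n1 --sdlba=256 --slbs=0 --blocks=64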
00:08:54.337  ************************************
00:08:54.337  END TEST nvme_scc
00:08:54.337  ************************************
00:08:54.337  
00:08:54.337  real	0m7.641s
00:08:54.337  user	0m1.113s
00:08:54.337  sys	0m1.390s
00:08:54.337   16:58:17 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.337   16:58:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:54.337   16:58:17  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:08:54.337   16:58:17  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:08:54.337   16:58:17  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:08:54.337   16:58:17  -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:08:54.337   16:58:17  -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:08:54.337   16:58:17  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:54.337   16:58:17  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.337   16:58:17  -- common/autotest_common.sh@10 -- # set +x
00:08:54.337  ************************************
00:08:54.337  START TEST nvme_fdp
00:08:54.337  ************************************
00:08:54.337   16:58:17 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:08:54.337  * Looking for test storage...
00:08:54.337  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:54.337     16:58:17 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:54.337      16:58:17 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:08:54.337      16:58:17 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:54.595     16:58:17 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:54.595     16:58:17 nvme_fdp -- scripts/common.sh@368 -- # return 0
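The trace above is cmp_versions deciding that lcov 1.15 predates 2: both strings are split on '.', '-' and ':' and compared component-wise, padding the shorter one with zeros. A compact equivalent, assuming purely numeric components, which holds for the versions here:

    lt() {
        local -a a b; local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: enable the compat coverage flags"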
00:08:54.595     16:58:17 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:54.595     16:58:17 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:54.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.595  		--rc genhtml_branch_coverage=1
00:08:54.595  		--rc genhtml_function_coverage=1
00:08:54.595  		--rc genhtml_legend=1
00:08:54.595  		--rc geninfo_all_blocks=1
00:08:54.595  		--rc geninfo_unexecuted_blocks=1
00:08:54.595  		
00:08:54.595  		'
00:08:54.595     16:58:17 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:54.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.595  		--rc genhtml_branch_coverage=1
00:08:54.595  		--rc genhtml_function_coverage=1
00:08:54.595  		--rc genhtml_legend=1
00:08:54.595  		--rc geninfo_all_blocks=1
00:08:54.595  		--rc geninfo_unexecuted_blocks=1
00:08:54.595  		
00:08:54.595  		'
00:08:54.595     16:58:17 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:54.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.595  		--rc genhtml_branch_coverage=1
00:08:54.595  		--rc genhtml_function_coverage=1
00:08:54.595  		--rc genhtml_legend=1
00:08:54.595  		--rc geninfo_all_blocks=1
00:08:54.595  		--rc geninfo_unexecuted_blocks=1
00:08:54.595  		
00:08:54.595  		'
00:08:54.595     16:58:17 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:54.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:54.595  		--rc genhtml_branch_coverage=1
00:08:54.595  		--rc genhtml_function_coverage=1
00:08:54.595  		--rc genhtml_legend=1
00:08:54.595  		--rc geninfo_all_blocks=1
00:08:54.595  		--rc geninfo_unexecuted_blocks=1
00:08:54.595  		
00:08:54.595  		'
00:08:54.595    16:58:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:54.595       16:58:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:54.595      16:58:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:54.595      16:58:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:54.595       16:58:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.595       16:58:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.595       16:58:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:54.595       16:58:17 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:08:54.595       16:58:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
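Note how each re-source of paths/export.sh prepends the golangci/protoc/go directories again, so by paths/export.sh@6 the same entries appear in PATH three or four times over. Harmless, but avoidable; a hedged sketch of an idempotent prepend (this is not what paths/export.sh does, as the duplicates above show):

    # Hypothetical guard: only prepend a directory not already in PATH.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, leave PATH alone
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin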
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:08:54.595     16:58:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:08:54.595    16:58:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:54.595   16:58:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:54.852  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:54.852  Waiting for block devices as requested
00:08:54.852  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:54.852  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:55.110  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:08:55.110  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:00.391  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
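Here setup.sh reset has unbound the four QEMU NVMe functions (1b36 0010) from uio_pci_generic and handed them back to the kernel nvme driver, while the virtio-blk boot disk (1af4 1001) is skipped because its partitions are mounted. A small sketch of verifying such a binding through the standard sysfs layout (the BDF is taken from the log; the rest is generic sysfs, not setup.sh code):

    # Print the driver currently bound to a PCI function, if any.
    bdf=0000:00:10.0
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    else
        echo "$bdf -> no driver bound"
    fi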
00:09:00.391   16:58:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:00.391   16:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:00.391   16:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:09:00.391   16:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:00.391   16:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.391    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:09:00.391    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:09:00.391   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:09:00.392    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.392   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:09:00.393    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:09:00.393   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
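The long run above is nvme_get populating the nvme0 associative array: each line of nvme id-ctrl output is split at the first colon by IFS=: read -r reg val, spaces are stripped from the key, and the value is stored as-is, which is why nvme0[sn] and nvme0[mn] keep their trailing padding. A self-contained sketch of the same pattern (plain local array instead of the eval-into-a-named-global that functions.sh traces above):

    # Parse "field : value" lines from nvme id-ctrl into an associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # skip the banner line, as at @22 above
        ctrl[${reg// /}]=${val# }      # strip spaces in key, one space after ':'
    done < <(nvme id-ctrl /dev/nvme0)
    echo "model: ${ctrl[mn]}, firmware: ${ctrl[fr]}"

Multi-line fields survive this scheme too: nvme-cli emits the power-state row across several lines, which is why ps0 lands as one key and its continuation shows up under rwt in the trace above.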
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.394   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:09:00.394    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.395    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.395   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
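The block ending at line 58 above is one complete nvme_get pass for ng0n1: functions.sh feeds nvme-cli's id-ns output through a first-colon split, lands each field in a global associative array named after the device, then records the namespace in the controller's table. A minimal sketch of that loop, approximating functions.sh lines 16-23 (the key trimming and the leading-space strip are assumed details, not verified source):

nvme_get() {                        # e.g. nvme_get ng0n1 id-ns /dev/ng0n1
    local ref=$1 reg val
    shift
    local -gA "$ref=()"             # declare the target array globally
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue   # keep only "key : value" lines
        reg=${reg//[[:space:]]/}    # trim the key (assumed detail)
        val=${val# }                # drop one leading space (assumed detail)
        eval "${ref}[\$reg]=\$val"  # e.g. ng0n1[nsattr]=0
    done < <(nvme "$@")             # the trace pins /usr/local/src/nvme-cli/nvme

Trailing padding survives because read only strips IFS characters, which is why entries such as ng0n1[lbaf0]='ms:0   lbads:9  rp:0 ' keep their spacing.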
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.396   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:09:00.396    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.397    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
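Lines 60 through 63 close out the first controller: nvme0 now appears in four tables, ctrls (device name), nvmes (the name of its per-namespace array, nvme0_ns), bdfs (its PCI address, 0000:00:11.0) and ordered_ctrls (indexed by controller number). A hedged sketch of how a consumer could walk those maps once the scan loop finishes; print_ctrl_summary is a made-up helper and the array shapes are inferred from this trace only:

declare -A ctrls nvmes bdfs             # filled by the scan loop traced above
declare -a ordered_ctrls
print_ctrl_summary() {                  # hypothetical helper, not in functions.sh
    local ctrl
    for ctrl in "${ordered_ctrls[@]}"; do
        unset -n ns_map                 # allow the nameref to be re-pointed
        local -n ns_map=${nvmes[$ctrl]} # e.g. nvme0_ns: ns index -> ns device
        printf '%s @ %s: %d namespace(s)\n' \
            "$ctrl" "${bdfs[$ctrl]}" "${#ns_map[@]}"
    done
}

With the state captured here this would report nvme0 @ 0000:00:11.0 with one namespace; note that ng0n1 and nvme0n1 both reduce to index 1 via ${ns##*n}, so the block node overwrites the generic node in nvme0_ns.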
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:00.397   16:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:00.397   16:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:09:00.397   16:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:00.397   16:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0
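pci_can_use (scripts/common.sh lines 18-27) gates which controllers the test may claim; with both filter variables empty, as traced here, every device passes and line 27 returns 0. A sketch consistent with the shape of this trace; the PCI_BLOCKED and PCI_ALLOWED names and the space-padded membership test are assumptions, not verified source:

pci_can_use() {                           # approximation of common.sh:18-27
    local i pci=$1
    # Block list first: empty in this run, so nothing matches (line 21)
    [[ " $PCI_BLOCKED " =~ " $pci " ]] && return 1
    # No allow list configured, so any device is usable (lines 25 and 27)
    [[ -z $PCI_ALLOWED ]] && return 0
    for i in $PCI_ALLOWED; do             # otherwise require a listed BDF
        [[ $i == "$pci" ]] && return 0
    done
    return 1
}

Called here as pci_can_use 0000:00:10.0; success lets the loop adopt the device as ctrl_dev=nvme1.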
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:00.397   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:09:00.398    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.398   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:09:00.399    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.399   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
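[Annotation] The frames above (functions.sh@17-@23) show nvme_get populating the nvme1 associative array: each output line of `nvme id-ctrl /dev/nvme1` is split on the first ':' into reg/val, empty values are skipped, and non-empty ones are stored via eval. A minimal sketch of that loop, reconstructed only from the @16-@23 frames in this trace; the exact whitespace trimming of reg and val, and the NVME_CMD name, are assumptions:

    nvme_get() {
        local ref=$1 reg val                # @17: target array name, then nvme-cli args
        shift                               # @18
        local -gA "$ref=()"                 # @20: (re)declare the global associative array
        while IFS=: read -r reg val; do     # @21: split each line on the first ':'
            [[ -n $val ]] || continue       # @22: skip lines without a value
            # @23: assumed trimming of spaces in the key and the value's leading blank
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <("$NVME_CMD" "$@")          # @16: e.g. nvme id-ctrl /dev/nvme1 (NVME_CMD hypothetical)
    }

Because read gives the last variable everything after the first ':', multi-colon lines stay intact. That is why ps0 above captured 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' wholesale, and why the power-state continuation line landed under the rwt key with 'rwl:0 idle_power:- active_power:-' still embedded in the value.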
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"'
00:09:00.400    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:09:00.400   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"'
00:09:00.401    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.401   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
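[Annotation] The @54-@58 frames are the per-controller namespace loop. The extglob pattern matches both the character node (ng1n1) and the block node (nvme1n1) under /sys/class/nvme/nvme1, runs nvme_get with id-ns on each, and files the device name under the namespace index. A sketch assembled from those frames (loop body details beyond what the trace shows are assumptions):

    shopt -s extglob                        # the @(...) pattern below needs extglob
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng1n1 and nvme1n1
        [[ -e $ns ]] || continue            # @55
        ns_dev=${ns##*/}                    # @56: basename of the sysfs entry
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57
        _ctrl_ns[${ns##*n}]=$ns_dev         # @58: ${ns##*n} strips through the last 'n',
    done                                    #      leaving the namespace index ("1" here)

The id-ns fields just captured also tell us the active format: flbas=0x7 selects lbaf7, which carries "(in use)" above and means lbads:12, i.e. 2^12 = 4096-byte logical blocks with 64 bytes of metadata each.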
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.402   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:09:00.402    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:09:00.403    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
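[Annotation] Both glob matches refer to namespace index 1, so @58 ran twice with the same key: first _ctrl_ns[1]=ng1n1 (the character device, sorted first), then _ctrl_ns[1]=nvme1n1. Associative-array assignment in bash simply overwrites, so the block device is what survives in the map:

    declare -A _ctrl_ns=()
    _ctrl_ns[1]=ng1n1
    _ctrl_ns[1]=nvme1n1       # later match wins
    echo "${_ctrl_ns[1]}"     # prints nvme1n1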
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
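[Annotation] With nvme1 fully parsed, @60-@63 register it in the global maps: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns (the *name* of the per-controller namespace array, to be dereferenced later via a nameref like the @53 `local -n` seen earlier), bdfs[nvme1]=0000:00:10.0 (its PCI address), and ordered_ctrls[1]=nvme1 (an indexed array keyed by the controller number via ${ctrl_dev/nvme/}). A minimal consumer sketch, assuming only the array names visible in this trace:

    for ctrl in "${ordered_ctrls[@]}"; do   # controllers in numeric order
        declare -n ns_map=${nvmes[$ctrl]}   # nameref to e.g. nvme1_ns
        echo "$ctrl @ ${bdfs[$ctrl]}: namespaces ${!ns_map[*]} -> ${ns_map[*]}"
        unset -n ns_map
    done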
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:00.403   16:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:00.403   16:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:09:00.403   16:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:00.403   16:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0
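[Annotation] pci_can_use (scripts/common.sh@18-@27) returned 0 for 0000:00:12.0 because the regex test at @21 ran against an empty block list (hence the bare `[[    =~  ... ]]` in the trace) and the list checked at @25 was empty too. One plausible reconstruction of the gate, with PCI_BLOCKED/PCI_ALLOWED as assumed variable names:

    pci_can_use() {
        local i                                       # @18
        [[ " $PCI_BLOCKED " =~ \ $1\  ]] && return 1  # @21: explicitly blocked
        [[ -z $PCI_ALLOWED ]] && return 0             # @25/@27: no allow list, all usable
        for i in $PCI_ALLOWED; do                     # otherwise only listed BDFs pass
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }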
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.403   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12342                ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342               "'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342               '
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl                          "'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl                          '
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0   "'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0   '
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
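[Annotation] The ver value 0x10400 just stored uses the NVMe Version register layout: major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0, so this controller reports NVMe 1.4.0. A one-liner to decode it:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))   # NVMe 1.4.0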
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:09:00.404    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.404   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:09:00.405    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.405   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12342 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
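The trace above is the tail of nvme_get populating the nvme2 associative array: each "field : value" line emitted by nvme-cli's id-ctrl is split on ':' by the IFS=:/read loop (functions.sh@21-22) and assigned via eval (functions.sh@23). Note how the multi-colon power-state line gets sliced across the ps0, rwt, and active_power_workload keys, a side effect of splitting on every ':'. A minimal sketch of the pattern follows, under the assumption that the command prints one "field : value" pair per line; nvme_get_sketch is an illustrative name, not the exact functions.sh source.

    #!/usr/bin/env bash
    # Illustrative sketch (assumption, not the literal functions.sh code):
    # read each "field : value" line, split on ':', and store the pair in a
    # globally scoped associative array named by the first argument.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declare the global array nvme2
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # strip the padding around the key
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val# }\""
        done < <("$@")                    # e.g. nvme id-ctrl /dev/nvme2
    }

    # Hypothetical usage, mirroring the trace:
    # nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2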
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:09:00.406    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.406   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:09:00.407    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.407   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
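With the controller registers captured, functions.sh@54-58 globs the controller's sysfs entry for namespace nodes, runs nvme_get against id-ns for each one (ng2n1 above), and records the device in the _ctrl_ns map keyed by the trailing namespace number. A hedged sketch of that loop, reusing the nvme_get_sketch helper from the earlier note; the glob mirrors the @(ng2|nvme2n)* pattern visible in the trace and needs extglob enabled.

    # Assumed shape of the per-namespace loop at functions.sh@54-58 (sketch only).
    shopt -s extglob
    enumerate_ns_sketch() {
        local ctrl=/sys/class/nvme/nvme2 ns ns_dev
        local -n _ctrl_ns=nvme2_ns        # nameref to this controller's ns map
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}              # e.g. ng2n1
            nvme_get_sketch "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev   # index namespaces by their number
        done
    }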
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:09:00.408    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.408   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.409   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:09:00.409    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:09:00.410    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:09:00.410   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
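With ng2n3 registered, the loop advances from the generic character nodes to the block-device nodes (nvme2n1 through nvme2n3). The enclosing scan tagged functions.sh@54-58 appears to be an extglob walk over the controller's sysfs directory; a sketch under that assumption, using only what the trace shows:

shopt -s extglob
# ctrl is a controller sysfs path, e.g. /sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue          # tolerate an unmatched glob
    ns_dev=${ns##*/}                  # ng2n3, nvme2n1, ...
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns##*n}]=$ns_dev       # keyed by the namespace index
done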
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:09:00.411    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.411   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.412   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:09:00.412    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.676    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
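Every field printed by id-ns for nvme2n1 is now addressable as a plain associative-array lookup. A hypothetical spot-check against the values captured above:

printf 'nsze=%s flbas=%s\n' "${nvme2n1[nsze]}" "${nvme2n1[flbas]}"
# -> nsze=0x100000 flbas=0x4
echo "${nvme2n1[lbaf4]}"
# -> ms:0   lbads:12 rp:0 (in use)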
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:09:00.676   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.677   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"'
00:09:00.677    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
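Note that each namespace appears twice in this scan: once as a generic character node (ng2nY, parsed earlier) and once as a block node (nvme2nY, parsed here). Both spellings collapse to the same key in _ctrl_ns, so the nvme2nY registration on the @58 line above overwrites the earlier ng2nY entry:

# ${ns##*n} strips everything through the last 'n' in the path
ns=/sys/class/nvme/nvme2/ng2n2;   echo "${ns##*n}"   # -> 2
ns=/sys/class/nvme/nvme2/nvme2n2; echo "${ns##*n}"   # -> 2
# hence _ctrl_ns[2]=ng2n2 first, then _ctrl_ns[2]=nvme2n2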
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"'
00:09:00.678    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:09:00.678   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:09:00.679    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
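With nvme2n3 captured, the fields worth reading together are flbas=0x4 and the lbaf4 row marked "(in use)": lbads:12 means 2^12 = 4096-byte logical blocks, ms:0 means no per-block metadata. A short decoding sketch using only values recorded above (variable names are illustrative):

    flbas=0x4                               # nvme2n3[flbas] from the trace
    lbaf='ms:0   lbads:12 rp:0 (in use)'    # nvme2n3[lbaf4] from the trace
    idx=$(( flbas & 0xf ))                  # low nibble selects the format -> 4
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "lbaf$idx in use: $(( 1 << lbads ))-byte logical blocks"   # -> 4096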
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
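Lines @60-@63 close out controller nvme2 by filing it into the lookup tables the rest of the suite consumes. Their shape after this pass, reconstructed from the traced assignments (the declare lines are illustrative; the arrays are created earlier in functions.sh):

    declare -A ctrls nvmes bdfs             # illustrative declarations
    declare -a ordered_ctrls
    ctrls[nvme2]=nvme2                      # device -> id-ctrl assoc array name
    nvmes[nvme2]=nvme2_ns                   # device -> namespace map
    bdfs[nvme2]=0000:00:12.0                # device -> PCI address
    ordered_ctrls[2]=nvme2                  # index from ${ctrl_dev/nvme/}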
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:00.679   16:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:00.680   16:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:00.680   16:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:09:00.680   16:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:00.680   16:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0
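scripts/common.sh@18-@27 is pci_can_use deciding whether 0000:00:13.0 may be touched: the allowlist test at @21 expands to an empty list, the blocklist test at @25 is empty as well, so the device is accepted. A rough approximation of that gate; PCI_ALLOWED/PCI_BLOCKED follow SPDK's test-environment naming, but the real function differs in detail:

    pci_can_use_sketch() {
        local bdf=$1
        if [[ -n ${PCI_ALLOWED:-} ]]; then
            # Allowlist set: the BDF must appear on it.
            [[ " $PCI_ALLOWED " == *" $bdf "* ]] || return 1
        fi
        # Blocklist, if any: the BDF must not appear on it.
        [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] && return 1
        return 0
    }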
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12343                ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343               "'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343               '
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl                          "'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl                          '
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0   "'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0   '
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x2 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x88010 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"'
00:09:00.680    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.680   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.681   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"'
00:09:00.681    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:fdp-subsys3 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:09:00.682    16:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:09:00.682   16:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:09:00.683   16:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:09:00.683   16:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:00.683   16:58:23 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
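The (( 4 > 0 )) check confirms the scan found at least one controller (four here, nvme0 through nvme3; note that no namespace loop ran for nvme3 between @53 and @60 above). A recurring idiom in this trace is the nameref at @53/@73: a local name aliasing a dynamically named global array. A self-contained sketch, with an entry that is purely hypothetical:

    declare -A nvme3_ns=()
    register_ns() {
        local -n _ctrl_ns=$1     # _ctrl_ns now aliases the array named in $1
        _ctrl_ns[1]=nvme3n1      # hypothetical entry; nvme3 shows none above
    }
    register_ns nvme3_ns
    echo "${nvme3_ns[1]}"        # -> nvme3n1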
00:09:00.683    16:58:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:09:00.683    16:58:23 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp
00:09:00.683    16:58:23 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]]
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:09:00.683      16:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:09:00.683     16:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:09:00.683    16:58:23 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:09:00.683    16:58:23 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:09:00.683    16:58:23 nvme_fdp -- nvme/functions.sh@209 -- # return 0
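The capability probe traced above comes down to a single bit: controllers advertise Flexible Data Placement in bit 19 of the Identify Controller CTRATT field, which is why nvme3's ctratt=0x88010 passes while the 0x8000 reported by the other three does not. A minimal sketch of that test (has_fdp is a hypothetical stand-in for the real ctrl_has_fdp in nvme/functions.sh):

  has_fdp() {                    # hypothetical name; mirrors ctrl_has_fdp
      local ctratt=$1
      (( ctratt & 1 << 19 ))     # exit status 0 only when the FDP bit is set
  }
  has_fdp 0x88010 && echo "FDP supported"   # prints, as for nvme3 above
  has_fdp 0x8000  || echo "no FDP"          # prints, as for nvme0/1/2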
00:09:00.683   16:58:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:09:00.683   16:58:23 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:09:00.683   16:58:23 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:01.254  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:01.513  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.513  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.773  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:01.773  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
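Each "nvme -> uio_pci_generic" line above is setup.sh unbinding a controller from the kernel nvme driver and handing it to uio_pci_generic so the fdp example can drive it from userspace. A sketch of the standard sysfs interface behind such a rebind (generic kernel nodes, not the script's exact code):

  echo 0000:00:13.0 > /sys/bus/pci/drivers/nvme/unbind                     # detach the kernel driver
  echo uio_pci_generic > /sys/bus/pci/devices/0000:00:13.0/driver_override # pin the userspace driver
  echo 0000:00:13.0 > /sys/bus/pci/drivers_probe                           # re-probe with the override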
00:09:01.773   16:58:24 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:01.773   16:58:24 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:01.773   16:58:24 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:01.773   16:58:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:01.773  ************************************
00:09:01.773  START TEST nvme_flexible_data_placement
00:09:01.773  ************************************
00:09:01.773   16:58:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:02.035  Initializing NVMe Controllers
00:09:02.035  Attaching to 0000:00:13.0
00:09:02.035  Controller supports FDP
00:09:02.035  Attached to 0000:00:13.0
00:09:02.035  Namespace ID: 1 Endurance Group ID: 1
00:09:02.035  Initialization complete.
00:09:02.035  
00:09:02.035  ==================================
00:09:02.035  == FDP tests for Namespace: #01 ==
00:09:02.035  ==================================
00:09:02.035  
00:09:02.035  Get Feature: FDP:
00:09:02.035  =================
00:09:02.035    Enabled:                 Yes
00:09:02.035    FDP configuration Index: 0
00:09:02.035  
00:09:02.035  FDP configurations log page
00:09:02.035  ===========================
00:09:02.035  Number of FDP configurations:         1
00:09:02.035  Version:                              0
00:09:02.035  Size:                                 112
00:09:02.035  FDP Configuration Descriptor:         0
00:09:02.035    Descriptor Size:                    96
00:09:02.035    Reclaim Group Identifier format:    2
00:09:02.035    FDP Volatile Write Cache:           Not Present
00:09:02.035    FDP Configuration:                  Valid
00:09:02.035    Vendor Specific Size:               0
00:09:02.035    Number of Reclaim Groups:           2
00:09:02.035    Number of Reclaim Unit Handles:     8
00:09:02.035    Max Placement Identifiers:          128
00:09:02.035    Number of Namespaces Supported:     256
00:09:02.035    Reclaim Unit Nominal Size:          6000000 bytes
00:09:02.035    Estimated Reclaim Unit Time Limit:  Not Reported
00:09:02.035      RUH Desc #000:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #001:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #002:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #003:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #004:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #005:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #006:          RUH Type: Initially Isolated
00:09:02.035      RUH Desc #007:          RUH Type: Initially Isolated
00:09:02.035  
00:09:02.035  FDP reclaim unit handle usage log page
00:09:02.035  ======================================
00:09:02.035  Number of Reclaim Unit Handles:       8
00:09:02.035    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:09:02.035    RUH Usage Desc #001:   RUH Attributes: Unused
00:09:02.035    RUH Usage Desc #002:   RUH Attributes: Unused
00:09:02.035    RUH Usage Desc #003:   RUH Attributes: Unused
00:09:02.035    RUH Usage Desc #004:   RUH Attributes: Unused
00:09:02.035    RUH Usage Desc #005:   RUH Attributes: Unused
00:09:02.035    RUH Usage Desc #006:   RUH Attributes: Unused
00:09:02.035    RUH Usage Desc #007:   RUH Attributes: Unused
00:09:02.035  
00:09:02.035  FDP statistics log page
00:09:02.035  =======================
00:09:02.035  Host bytes with metadata written:  1019117568
00:09:02.035  Media bytes with metadata written: 1019203584
00:09:02.035  Media bytes erased:                0
00:09:02.035  
00:09:02.035  FDP Reclaim unit handle status
00:09:02.035  ==============================
00:09:02.035  Number of RUHS descriptors:   2
00:09:02.035  RUHS Desc: #0000  PID: 0x0000  RUHID: 0x0000  ERUT: 0x00000000  RUAMW: 0x0000000000005418
00:09:02.035  RUHS Desc: #0001  PID: 0x4000  RUHID: 0x0000  ERUT: 0x00000000  RUAMW: 0x0000000000006000
00:09:02.035  
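RUAMW in the descriptors above is the Reclaim Unit Available Media Writes counter, reported in hex; decoding the two values shows the partially consumed default handle next to the freshly reset placement identifier:

  printf '%d\n' 0x5418   # 21528 writes remaining on PID 0x0000
  printf '%d\n' 0x6000   # 24576 writes remaining on PID 0x4000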
00:09:02.035  FDP write on placement id: 0 success
00:09:02.035  
00:09:02.035  Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:02.035  
00:09:02.035  IO mgmt send: RUH update for Placement ID: #0 Success
00:09:02.035  
00:09:02.035  Get Feature: FDP Events for Placement handle: #0
00:09:02.035  ================================================
00:09:02.035  Number of FDP Events: 6
00:09:02.035  FDP Event: #0  Type: RU Not Written to Capacity     Enabled: Yes
00:09:02.035  FDP Event: #1  Type: RU Time Limit Exceeded         Enabled: Yes
00:09:02.035  FDP Event: #2  Type: Ctrlr Reset Modified RUHs      Enabled: Yes
00:09:02.035  FDP Event: #3  Type: Invalid Placement Identifier   Enabled: Yes
00:09:02.035  FDP Event: #4  Type: Media Reallocated              Enabled: No
00:09:02.035  FDP Event: #5  Type: Implicitly modified RUH        Enabled: No
00:09:02.035  
00:09:02.035  FDP events log page
00:09:02.035  ===================
00:09:02.035  Number of FDP events: 1
00:09:02.035  FDP Event #0:
00:09:02.035    Event Type:                      RU Not Written to Capacity
00:09:02.035    Placement Identifier:            Valid
00:09:02.035    NSID:                            Valid
00:09:02.035    Location:                        Valid
00:09:02.035    Placement Identifier:            0
00:09:02.035    Event Timestamp:                 7
00:09:02.035    Namespace Identifier:            1
00:09:02.035    Reclaim Group Identifier:        0
00:09:02.035    Reclaim Unit Handle Identifier:  0
00:09:02.035  
00:09:02.035  FDP test passed
00:09:02.035  
00:09:02.035  real	0m0.240s
00:09:02.035  user	0m0.073s
00:09:02.035  sys	0m0.066s
00:09:02.035  ************************************
00:09:02.035  END TEST nvme_flexible_data_placement
00:09:02.035  ************************************
00:09:02.035   16:58:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.035   16:58:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:02.035  
00:09:02.035  real	0m7.715s
00:09:02.035  user	0m1.131s
00:09:02.035  sys	0m1.376s
00:09:02.035   16:58:24 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.035  ************************************
00:09:02.035  END TEST nvme_fdp
00:09:02.035  ************************************
00:09:02.035   16:58:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:02.035   16:58:25  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:02.035   16:58:25  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:02.035   16:58:25  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:02.035   16:58:25  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.035   16:58:25  -- common/autotest_common.sh@10 -- # set +x
00:09:02.035  ************************************
00:09:02.035  START TEST nvme_rpc
00:09:02.035  ************************************
00:09:02.035   16:58:25 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:02.298  * Looking for test storage...
00:09:02.298  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:02.298    16:58:25 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:02.298     16:58:25 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:02.298     16:58:25 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:02.298    16:58:25 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:02.298    16:58:25 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:02.298    16:58:25 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:02.298    16:58:25 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:02.298    16:58:25 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:02.298    16:58:25 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:02.298    16:58:25 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:02.299     16:58:25 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:02.299    16:58:25 nvme_rpc -- scripts/common.sh@368 -- # return 0
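The lt 1.15 2 exchange above is scripts/common.sh comparing the installed lcov version element by element to decide which coverage flags apply; a condensed sketch of that comparison (version_lt is a hypothetical stand-in for cmp_versions):

  version_lt() {          # hypothetical stand-in for scripts/common.sh cmp_versions
      local IFS=.-:       # the same separators the trace splits on
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1            # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 < 2: keep the legacy --rc options"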
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:02.299  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.299  		--rc genhtml_branch_coverage=1
00:09:02.299  		--rc genhtml_function_coverage=1
00:09:02.299  		--rc genhtml_legend=1
00:09:02.299  		--rc geninfo_all_blocks=1
00:09:02.299  		--rc geninfo_unexecuted_blocks=1
00:09:02.299  		
00:09:02.299  		'
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:02.299  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.299  		--rc genhtml_branch_coverage=1
00:09:02.299  		--rc genhtml_function_coverage=1
00:09:02.299  		--rc genhtml_legend=1
00:09:02.299  		--rc geninfo_all_blocks=1
00:09:02.299  		--rc geninfo_unexecuted_blocks=1
00:09:02.299  		
00:09:02.299  		'
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:02.299  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.299  		--rc genhtml_branch_coverage=1
00:09:02.299  		--rc genhtml_function_coverage=1
00:09:02.299  		--rc genhtml_legend=1
00:09:02.299  		--rc geninfo_all_blocks=1
00:09:02.299  		--rc geninfo_unexecuted_blocks=1
00:09:02.299  		
00:09:02.299  		'
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:02.299  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.299  		--rc genhtml_branch_coverage=1
00:09:02.299  		--rc genhtml_function_coverage=1
00:09:02.299  		--rc genhtml_legend=1
00:09:02.299  		--rc geninfo_all_blocks=1
00:09:02.299  		--rc geninfo_unexecuted_blocks=1
00:09:02.299  		
00:09:02.299  		'
00:09:02.299   16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:02.299    16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:09:02.299     16:58:25 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:09:02.299     16:58:25 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:02.299     16:58:25 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:09:02.299     16:58:25 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:02.299      16:58:25 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:02.299      16:58:25 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:02.299     16:58:25 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:09:02.299     16:58:25 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:02.299    16:58:25 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:09:02.299   16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
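get_first_nvme_bdf above works by asking gen_nvme.sh for an SPDK bdev config and pulling each controller's PCI address out of the attach parameters; the same lookup in isolation, with the paths this run used:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  echo "${bdfs[0]}"   # 0000:00:10.0 on this machine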
00:09:02.299   16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67007
00:09:02.299   16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:09:02.299   16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67007
00:09:02.299   16:58:25 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67007 ']'
00:09:02.299   16:58:25 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:02.299   16:58:25 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:02.299  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:02.299   16:58:25 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:02.299   16:58:25 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:02.299   16:58:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:02.299   16:58:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:09:02.299  [2024-12-09 16:58:25.308173] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:09:02.299  [2024-12-09 16:58:25.308296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67007 ]
00:09:02.561  [2024-12-09 16:58:25.472332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:02.561  [2024-12-09 16:58:25.576627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:02.561  [2024-12-09 16:58:25.576737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.130   16:58:26 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:03.130   16:58:26 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:03.130   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:09:03.390  Nvme0n1
00:09:03.390   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:09:03.390   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:09:03.651  request:
00:09:03.651  {
00:09:03.651    "bdev_name": "Nvme0n1",
00:09:03.651    "filename": "non_existing_file",
00:09:03.651    "method": "bdev_nvme_apply_firmware",
00:09:03.651    "req_id": 1
00:09:03.651  }
00:09:03.651  Got JSON-RPC error response
00:09:03.651  response:
00:09:03.651  {
00:09:03.651    "code": -32603,
00:09:03.651    "message": "open file failed."
00:09:03.651  }
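The -32603 response above is the expected outcome: the test feeds bdev_nvme_apply_firmware a file that does not exist to prove the RPC fails cleanly rather than hanging the target. A sketch of the same negative check:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo "firmware apply unexpectedly succeeded" >&2
      exit 1
  fi   # "open file failed." (code -32603) is the pass condition here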
00:09:03.651   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:09:03.651   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:09:03.651   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:09:03.911   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:09:03.911   16:58:26 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67007
00:09:03.911   16:58:26 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67007 ']'
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67007
00:09:03.912    16:58:26 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:03.912    16:58:26 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67007
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67007'
00:09:03.912  killing process with pid 67007
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67007
00:09:03.912   16:58:26 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67007
00:09:05.293  
00:09:05.293  real	0m3.280s
00:09:05.293  user	0m6.230s
00:09:05.293  sys	0m0.486s
00:09:05.293   16:58:28 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.293  ************************************
00:09:05.293   16:58:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:05.293  END TEST nvme_rpc
00:09:05.293  ************************************
00:09:05.554   16:58:28  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:09:05.554   16:58:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:05.554   16:58:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:05.554   16:58:28  -- common/autotest_common.sh@10 -- # set +x
00:09:05.554  ************************************
00:09:05.554  START TEST nvme_rpc_timeouts
00:09:05.554  ************************************
00:09:05.554   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:09:05.554  * Looking for test storage...
00:09:05.554  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:05.554    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:05.554     16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:05.554     16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version
00:09:05.554    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:05.554     16:58:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:05.554    16:58:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:09:05.554    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:05.554    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:05.554  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:05.554  		--rc genhtml_branch_coverage=1
00:09:05.554  		--rc genhtml_function_coverage=1
00:09:05.554  		--rc genhtml_legend=1
00:09:05.554  		--rc geninfo_all_blocks=1
00:09:05.554  		--rc geninfo_unexecuted_blocks=1
00:09:05.554  		
00:09:05.554  		'
00:09:05.555    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:05.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:05.555  		--rc genhtml_branch_coverage=1
00:09:05.555  		--rc genhtml_function_coverage=1
00:09:05.555  		--rc genhtml_legend=1
00:09:05.555  		--rc geninfo_all_blocks=1
00:09:05.555  		--rc geninfo_unexecuted_blocks=1
00:09:05.555  		
00:09:05.555  		'
00:09:05.555    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:05.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:05.555  		--rc genhtml_branch_coverage=1
00:09:05.555  		--rc genhtml_function_coverage=1
00:09:05.555  		--rc genhtml_legend=1
00:09:05.555  		--rc geninfo_all_blocks=1
00:09:05.555  		--rc geninfo_unexecuted_blocks=1
00:09:05.555  		
00:09:05.555  		'
00:09:05.555    16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:05.555  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:05.555  		--rc genhtml_branch_coverage=1
00:09:05.555  		--rc genhtml_function_coverage=1
00:09:05.555  		--rc genhtml_legend=1
00:09:05.555  		--rc geninfo_all_blocks=1
00:09:05.555  		--rc geninfo_unexecuted_blocks=1
00:09:05.555  		
00:09:05.555  		'
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67072
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67072
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67104
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67104
00:09:05.555   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67104 ']'
00:09:05.555   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:05.555   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:05.555  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:05.555   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:05.555   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:05.555   16:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:09:05.555   16:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:09:05.555  [2024-12-09 16:58:28.580412] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:09:05.555  [2024-12-09 16:58:28.580523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67104 ]
00:09:05.815  [2024-12-09 16:58:28.740879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:05.815  [2024-12-09 16:58:28.841984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:05.815  [2024-12-09 16:58:28.842116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.757   16:58:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:06.757   16:58:29 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:09:06.757  Checking default timeout settings:
00:09:06.757   16:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:09:06.757   16:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:09:06.757  Making settings changes with rpc:
00:09:06.757   16:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:09:06.757   16:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
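The modified profile above sets a 12 s I/O timeout, a 24 s admin-command timeout, and aborts the offending command when either expires; applied by hand with the values from this run, then snapshotted for the diff that follows (file name as used here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_set_options \
      --timeout-us=12000000 \
      --timeout-admin-us=24000000 \
      --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_67072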
00:09:07.017  Check default vs. modified settings:
00:09:07.017   16:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:09:07.017   16:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:09:07.278   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:09:07.278   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:09:07.278    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67072
00:09:07.278    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:09:07.278    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:09:07.278   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:09:07.278    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67072
00:09:07.278    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:09:07.278    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:09:07.539  Setting action_on_timeout is changed as expected.
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67072
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67072
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:09:07.539  Setting timeout_us is changed as expected.
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67072
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67072
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:09:07.539    16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:09:07.539  Setting timeout_admin_us is changed as expected.
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
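Every "changed as expected" verdict above comes from the same three-stage pipeline: grep the setting out of a saved config snapshot, take the second field, strip punctuation, then compare the default and modified values. A condensed sketch (get_setting is a hypothetical wrapper around the grep/awk/sed trio traced at nvme_rpc_timeouts.sh@40-41):

  get_setting() {   # hypothetical wrapper; mirrors the grep|awk|sed above
      grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
  }
  before=$(get_setting timeout_us /tmp/settings_default_67072)    # 0
  after=$(get_setting timeout_us /tmp/settings_modified_67072)    # 12000000
  [ "$before" == "$after" ] && echo "timeout_us did not change" >&2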
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67072 /tmp/settings_modified_67072
00:09:07.539   16:58:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67104
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67104 ']'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67104
00:09:07.539    16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:07.539    16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67104
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:07.539  killing process with pid 67104
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67104'
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67104
00:09:07.539   16:58:30 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67104
00:09:08.936  RPC TIMEOUT SETTING TEST PASSED.
00:09:08.936   16:58:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:09:08.936  
00:09:08.936  real	0m3.548s
00:09:08.936  user	0m6.887s
00:09:08.936  sys	0m0.489s
00:09:08.936   16:58:31 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.936   16:58:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:09:08.936  ************************************
00:09:08.936  END TEST nvme_rpc_timeouts
00:09:08.936  ************************************
00:09:08.936    16:58:31  -- spdk/autotest.sh@239 -- # uname -s
00:09:08.936   16:58:31  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:09:08.936   16:58:31  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:09:08.936   16:58:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:08.936   16:58:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:08.936   16:58:31  -- common/autotest_common.sh@10 -- # set +x
00:09:08.936  ************************************
00:09:08.936  START TEST sw_hotplug
00:09:08.936  ************************************
00:09:08.936   16:58:31 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:09:09.194  * Looking for test storage...
00:09:09.194  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:09.194    16:58:32 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:09.194     16:58:32 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:09.194     16:58:32 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version
00:09:09.194    16:58:32 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:09:09.194    16:58:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:09.195     16:58:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:09.195    16:58:32 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:09:09.195    16:58:32 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:09.195    16:58:32 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:09.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.195  		--rc genhtml_branch_coverage=1
00:09:09.195  		--rc genhtml_function_coverage=1
00:09:09.195  		--rc genhtml_legend=1
00:09:09.195  		--rc geninfo_all_blocks=1
00:09:09.195  		--rc geninfo_unexecuted_blocks=1
00:09:09.195  		
00:09:09.195  		'
00:09:09.195    16:58:32 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:09.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.195  		--rc genhtml_branch_coverage=1
00:09:09.195  		--rc genhtml_function_coverage=1
00:09:09.195  		--rc genhtml_legend=1
00:09:09.195  		--rc geninfo_all_blocks=1
00:09:09.195  		--rc geninfo_unexecuted_blocks=1
00:09:09.195  		
00:09:09.195  		'
00:09:09.195    16:58:32 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:09.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.195  		--rc genhtml_branch_coverage=1
00:09:09.195  		--rc genhtml_function_coverage=1
00:09:09.195  		--rc genhtml_legend=1
00:09:09.195  		--rc geninfo_all_blocks=1
00:09:09.195  		--rc geninfo_unexecuted_blocks=1
00:09:09.195  		
00:09:09.195  		'
00:09:09.195    16:58:32 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:09.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.195  		--rc genhtml_branch_coverage=1
00:09:09.195  		--rc genhtml_function_coverage=1
00:09:09.195  		--rc genhtml_legend=1
00:09:09.195  		--rc geninfo_all_blocks=1
00:09:09.195  		--rc geninfo_unexecuted_blocks=1
00:09:09.195  		
00:09:09.195  		'
00:09:09.195   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:09.454  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:09.454  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:09:09.454  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:09:09.454  0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:09:09.454  0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:09:09.713   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:09:09.713   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:09:09.713   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:09:09.713    16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@233 -- # local class
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:09:09.713       16:58:32 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:09:09.713       16:58:32 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:09:09.713       16:58:32 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:09:09.713      16:58:32 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@18 -- # local i
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@18 -- # local i
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@18 -- # local i
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@18 -- # local i
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]]
00:09:09.713     16:58:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@328 -- # (( 4 ))
00:09:09.713    16:58:32 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
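nvme_in_userspace above discovers controllers purely by PCI class: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). The pipeline from the trace, reassembled in source-line order so it runs on its own:

  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # one BDF per line: 0000:00:10.0 through 0000:00:13.0 on this host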
00:09:09.713   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2
00:09:09.713   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
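sw_hotplug then keeps only the first nvme_count controllers via bash array slicing; the ${array[@]::N} expansion used above, in isolation:

  nvmes=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  nvme_count=2
  nvmes=("${nvmes[@]::nvme_count}")
  printf '%s\n' "${nvmes[@]}"   # 0000:00:10.0 and 0000:00:11.0 only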
00:09:09.713   16:58:32 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:09.971  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:09.972  Waiting for block devices as requested
00:09:09.972  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:10.230  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:10.230  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:10.230  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:15.497  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:09:15.497   16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0'
00:09:15.497   16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:15.755  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:09:15.755  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:15.755  0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0
00:09:16.014  0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0
00:09:16.272  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:16.272  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:09:16.272   16:58:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67963
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:09:16.272   16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning
00:09:16.272    16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:09:16.272    16:58:39 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:09:16.272    16:58:39 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:09:16.272    16:58:39 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:09:16.272    16:58:39 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:09:16.272     16:58:39 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:09:16.272     16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:09:16.272     16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:09:16.272     16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:09:16.272     16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:09:16.272     16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:09:16.535  Initializing NVMe Controllers
00:09:16.535  Attaching to 0000:00:10.0
00:09:16.535  Attaching to 0000:00:11.0
00:09:16.535  Attached to 0000:00:10.0
00:09:16.535  Attached to 0000:00:11.0
00:09:16.535  Initialization complete. Starting I/O...
00:09:16.535  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:09:16.535  QEMU NVMe Ctrl       (12341               ):          0 I/Os completed (+0)
00:09:16.535  
00:09:17.469  QEMU NVMe Ctrl       (12340               ):       2547 I/Os completed (+2547)
00:09:17.469  QEMU NVMe Ctrl       (12341               ):       2639 I/Os completed (+2639)
00:09:17.469  
00:09:18.446  QEMU NVMe Ctrl       (12340               ):       6101 I/Os completed (+3554)
00:09:18.446  QEMU NVMe Ctrl       (12341               ):       6156 I/Os completed (+3517)
00:09:18.446  
00:09:19.823  QEMU NVMe Ctrl       (12340               ):       9811 I/Os completed (+3710)
00:09:19.823  QEMU NVMe Ctrl       (12341               ):       9835 I/Os completed (+3679)
00:09:19.823  
00:09:20.390  QEMU NVMe Ctrl       (12340               ):      13398 I/Os completed (+3587)
00:09:20.390  QEMU NVMe Ctrl       (12341               ):      13431 I/Os completed (+3596)
00:09:20.390  
00:09:21.763  QEMU NVMe Ctrl       (12340               ):      17039 I/Os completed (+3641)
00:09:21.763  QEMU NVMe Ctrl       (12341               ):      17037 I/Os completed (+3606)
00:09:21.763  
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:22.330  [2024-12-09 16:58:45.227722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:22.330  Controller removed: QEMU NVMe Ctrl       (12340               )
00:09:22.330  [2024-12-09 16:58:45.228746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.228795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.228812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.228828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:09:22.330  [2024-12-09 16:58:45.230576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.230625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.230642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.230655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:22.330  [2024-12-09 16:58:45.247228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:22.330  Controller removed: QEMU NVMe Ctrl       (12341               )
00:09:22.330  [2024-12-09 16:58:45.248191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.248229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.248249] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.248265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:09:22.330  [2024-12-09 16:58:45.249693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.249728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.249741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  [2024-12-09 16:58:45.249753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:22.330  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:09:22.330  EAL: Scan for (pci) bus failed.
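The xtrace above (sw_hotplug.sh@39-40) is the surprise-removal step: for each controller the script writes into the device's sysfs remove node, which is what triggers the "Controller removed" and qpair-abort messages interleaved with it. A minimal sketch of what those echoes amount to, assuming the standard Linux PCI sysfs layout (the trace shows only the values, not the redirection targets):

    for dev in "${nvmes[@]}"; do
        # detach the device from the PCI bus; SPDK observes a surprise removal
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done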
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:22.330     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:22.587  Attaching to 0000:00:10.0
00:09:22.587  Attached to 0000:00:10.0
00:09:22.587  QEMU NVMe Ctrl       (12340               ):         20 I/Os completed (+20)
00:09:22.587  
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:22.587     16:58:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:09:22.587  Attaching to 0000:00:11.0
00:09:22.587  Attached to 0000:00:11.0
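The @56-62 lines then re-attach both controllers: one bus rescan followed by a per-device driver_override sequence that steers each BDF back to uio_pci_generic. A sketch under the assumption that the echoes target the usual sysfs knobs (again, the redirections themselves are not visible in the xtrace):

    echo 1 > /sys/bus/pci/rescan                                        # @56: rediscover the removed devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the driver choice
        echo "$dev" > /sys/bus/pci/drivers_probe                        # @60/@61: (re)probe this BDF
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"           # @62: clear the override
    done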
00:09:23.520  QEMU NVMe Ctrl       (12340               ):       3868 I/Os completed (+3848)
00:09:23.520  QEMU NVMe Ctrl       (12341               ):       3745 I/Os completed (+3745)
00:09:23.520  
00:09:24.453  QEMU NVMe Ctrl       (12340               ):       8389 I/Os completed (+4521)
00:09:24.453  QEMU NVMe Ctrl       (12341               ):       8110 I/Os completed (+4365)
00:09:24.453  
00:09:25.387  QEMU NVMe Ctrl       (12340               ):      12150 I/Os completed (+3761)
00:09:25.387  QEMU NVMe Ctrl       (12341               ):      11658 I/Os completed (+3548)
00:09:25.387  
00:09:26.759  QEMU NVMe Ctrl       (12340               ):      16230 I/Os completed (+4080)
00:09:26.759  QEMU NVMe Ctrl       (12341               ):      15253 I/Os completed (+3595)
00:09:26.759  
00:09:27.691  QEMU NVMe Ctrl       (12340               ):      19743 I/Os completed (+3513)
00:09:27.691  QEMU NVMe Ctrl       (12341               ):      18754 I/Os completed (+3501)
00:09:27.691  
00:09:28.624  QEMU NVMe Ctrl       (12340               ):      24311 I/Os completed (+4568)
00:09:28.624  QEMU NVMe Ctrl       (12341               ):      24054 I/Os completed (+5300)
00:09:28.624  
00:09:29.557  QEMU NVMe Ctrl       (12340               ):      28140 I/Os completed (+3829)
00:09:29.557  QEMU NVMe Ctrl       (12341               ):      28453 I/Os completed (+4399)
00:09:29.557  
00:09:30.491  QEMU NVMe Ctrl       (12340               ):      31840 I/Os completed (+3700)
00:09:30.491  QEMU NVMe Ctrl       (12341               ):      32131 I/Os completed (+3678)
00:09:30.491  
00:09:31.427  QEMU NVMe Ctrl       (12340               ):      35653 I/Os completed (+3813)
00:09:31.427  QEMU NVMe Ctrl       (12341               ):      35742 I/Os completed (+3611)
00:09:31.427  
00:09:32.799  QEMU NVMe Ctrl       (12340               ):      39697 I/Os completed (+4044)
00:09:32.799  QEMU NVMe Ctrl       (12341               ):      39653 I/Os completed (+3911)
00:09:32.799  
00:09:33.733  QEMU NVMe Ctrl       (12340               ):      43035 I/Os completed (+3338)
00:09:33.733  QEMU NVMe Ctrl       (12341               ):      43036 I/Os completed (+3383)
00:09:33.733  
00:09:34.666  QEMU NVMe Ctrl       (12340               ):      46925 I/Os completed (+3890)
00:09:34.666  QEMU NVMe Ctrl       (12341               ):      46684 I/Os completed (+3648)
00:09:34.666  
00:09:34.666     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:09:34.666     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:34.666     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:34.666     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:34.666  [2024-12-09 16:58:57.488885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:34.666  Controller removed: QEMU NVMe Ctrl       (12340               )
00:09:34.666  [2024-12-09 16:58:57.490800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.490934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.490953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.490970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:09:34.666  [2024-12-09 16:58:57.492641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.492726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.492742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.492758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:34.666     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:34.666  [2024-12-09 16:58:57.510612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:34.666  Controller removed: QEMU NVMe Ctrl       (12341               )
00:09:34.666  [2024-12-09 16:58:57.511637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.511669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.511686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.511699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:09:34.666  [2024-12-09 16:58:57.513090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.666  [2024-12-09 16:58:57.513119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.667  [2024-12-09 16:58:57.513131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.667  [2024-12-09 16:58:57.513144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:34.667  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:09:34.667  EAL: Scan for (pci) bus failed.
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:34.667     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:34.667  Attaching to 0000:00:10.0
00:09:34.667  Attached to 0000:00:10.0
00:09:34.924     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:34.924     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:34.924     16:58:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:09:34.924  Attaching to 0000:00:11.0
00:09:34.924  Attached to 0000:00:11.0
00:09:35.490  QEMU NVMe Ctrl       (12340               ):       2732 I/Os completed (+2732)
00:09:35.490  QEMU NVMe Ctrl       (12341               ):       2429 I/Os completed (+2429)
00:09:35.490  
00:09:36.422  QEMU NVMe Ctrl       (12340               ):       6274 I/Os completed (+3542)
00:09:36.422  QEMU NVMe Ctrl       (12341               ):       6036 I/Os completed (+3607)
00:09:36.422  
00:09:37.795  QEMU NVMe Ctrl       (12340               ):       9889 I/Os completed (+3615)
00:09:37.795  QEMU NVMe Ctrl       (12341               ):       9734 I/Os completed (+3698)
00:09:37.795  
00:09:38.727  QEMU NVMe Ctrl       (12340               ):      13478 I/Os completed (+3589)
00:09:38.727  QEMU NVMe Ctrl       (12341               ):      13244 I/Os completed (+3510)
00:09:38.727  
00:09:39.660  QEMU NVMe Ctrl       (12340               ):      17952 I/Os completed (+4474)
00:09:39.660  QEMU NVMe Ctrl       (12341               ):      17947 I/Os completed (+4703)
00:09:39.660  
00:09:40.678  QEMU NVMe Ctrl       (12340               ):      22855 I/Os completed (+4903)
00:09:40.678  QEMU NVMe Ctrl       (12341               ):      23725 I/Os completed (+5778)
00:09:40.678  
00:09:41.611  QEMU NVMe Ctrl       (12340               ):      27387 I/Os completed (+4532)
00:09:41.611  QEMU NVMe Ctrl       (12341               ):      29084 I/Os completed (+5359)
00:09:41.611  
00:09:42.545  QEMU NVMe Ctrl       (12340               ):      31003 I/Os completed (+3616)
00:09:42.545  QEMU NVMe Ctrl       (12341               ):      32989 I/Os completed (+3905)
00:09:42.545  
00:09:43.477  QEMU NVMe Ctrl       (12340               ):      34913 I/Os completed (+3910)
00:09:43.477  QEMU NVMe Ctrl       (12341               ):      37105 I/Os completed (+4116)
00:09:43.477  
00:09:44.410  QEMU NVMe Ctrl       (12340               ):      38502 I/Os completed (+3589)
00:09:44.410  QEMU NVMe Ctrl       (12341               ):      40801 I/Os completed (+3696)
00:09:44.410  
00:09:45.782  QEMU NVMe Ctrl       (12340               ):      42399 I/Os completed (+3897)
00:09:45.782  QEMU NVMe Ctrl       (12341               ):      44406 I/Os completed (+3605)
00:09:45.782  
00:09:46.716  QEMU NVMe Ctrl       (12340               ):      45993 I/Os completed (+3594)
00:09:46.716  QEMU NVMe Ctrl       (12341               ):      47973 I/Os completed (+3567)
00:09:46.716  
00:09:46.716     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:09:46.716     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:46.716     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:46.716     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:46.716  [2024-12-09 16:59:09.746738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:09:46.716  Controller removed: QEMU NVMe Ctrl       (12340               )
00:09:46.716  [2024-12-09 16:59:09.747880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  [2024-12-09 16:59:09.748000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  [2024-12-09 16:59:09.748031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  [2024-12-09 16:59:09.748091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:09:46.716  [2024-12-09 16:59:09.749812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  [2024-12-09 16:59:09.749917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  [2024-12-09 16:59:09.749945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.716  [2024-12-09 16:59:09.750003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:09:46.974  [2024-12-09 16:59:09.767154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:09:46.974  Controller removed: QEMU NVMe Ctrl       (12341               )
00:09:46.974  [2024-12-09 16:59:09.768185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  [2024-12-09 16:59:09.768288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  [2024-12-09 16:59:09.768322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  [2024-12-09 16:59:09.768381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:09:46.974  [2024-12-09 16:59:09.769991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  [2024-12-09 16:59:09.770082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  [2024-12-09 16:59:09.770113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  [2024-12-09 16:59:09.770164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:09:46.974  EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:11.0/device
00:09:46.974  EAL: Scan for (pci) bus failed.
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:09:46.974  Attaching to 0000:00:10.0
00:09:46.974  Attached to 0000:00:10.0
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:09:46.974     16:59:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:09:46.974     16:59:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:09:46.974  Attaching to 0000:00:11.0
00:09:46.974  Attached to 0000:00:11.0
00:09:46.974  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:09:46.974  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:09:46.974  [2024-12-09 16:59:10.009682] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:09:59.177     16:59:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:09:59.177     16:59:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:09:59.177    16:59:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.78
00:09:59.177    16:59:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.78
00:09:59.177    16:59:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:09:59.177   16:59:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.78
00:09:59.177   16:59:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.78 2
00:09:59.177  remove_attach_helper took 42.78s to complete (handling 2 nvme drive(s))
00:09:59.177   16:59:22 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
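The 42.78 figure comes from timing_cmd (autotest_common.sh@709-722), which runs the helper under bash's `time` keyword with TIMEFORMAT=%2R and echoes the measured real time for the caller to store as helper_time. A minimal sketch, assuming the stderr plumbing (only the variable names and the echoed result appear in the trace):

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        # the `time` keyword prints to stderr; capture it while discarding the command's stdout
        time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"      # e.g. 42.78, consumed as helper_time by the caller
        return "$cmd_es"
    }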
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67963
00:10:05.805  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67963) - No such process
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67963
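@93-95 reap the background I/O helper: kill -0 probes whether the PID still exists (here it has already exited, hence the shell's "No such process"), and wait collects its exit status either way. The pattern in isolation, with $pid standing in for the helper's PID:

    kill -0 "$pid"      # probe only: succeeds while the process is alive
    wait "$pid"         # still reaps the status of the already-finished job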
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68517
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:10:05.805   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68517
00:10:05.805   16:59:28 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68517 ']'
00:10:05.805   16:59:28 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:05.805   16:59:28 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:05.805  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:05.805   16:59:28 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:05.805   16:59:28 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:05.805   16:59:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:05.805  [2024-12-09 16:59:28.098642] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:10:05.805  [2024-12-09 16:59:28.098766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68517 ]
00:10:05.805  [2024-12-09 16:59:28.254481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.805  [2024-12-09 16:59:28.363757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.063   16:59:28 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.063   16:59:28 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:10:06.063   16:59:28 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:10:06.063   16:59:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.063   16:59:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:06.063   16:59:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
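At @115 the target's own hotplug monitor takes over: bdev_nvme_set_hotplug -e is issued over JSON-RPC before the remove/attach loop is repeated with use_bdev=true, so detection now happens inside spdk_tgt rather than via sysfs. The equivalent call from the command line, with the rpc.py path assumed from this repo's layout:

    # -e enables the bdev layer's NVMe hotplug monitor on the running target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e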
00:10:06.063   16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:10:06.063   16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:10:06.063    16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:10:06.063    16:59:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:10:06.063    16:59:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:10:06.063    16:59:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:10:06.063    16:59:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:10:06.063     16:59:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:10:06.063     16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:10:06.063     16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:10:06.063     16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:10:06.063     16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:10:06.063     16:59:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:12.619     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:12.619      16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:12.619      16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:12.620       16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:12.620      16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:12.620       16:59:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.620       16:59:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:12.620       16:59:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.620     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:12.620     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
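With use_bdev=true the test polls the target instead of sysfs: bdev_bdfs (sw_hotplug.sh@12-13) asks bdev_get_bdevs over RPC and reduces the answer to a sorted, unique list of PCI addresses, and the @50 loop sleeps 0.5s at a time until that list is empty. A sketch matching the traced jq filter, assuming rpc_cmd wraps the stock rpc.py client:

    bdev_bdfs() {
        # list NVMe-backed bdevs and keep one BDF per controller
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }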
00:10:12.620  [2024-12-09 16:59:35.105192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:12.620  [2024-12-09 16:59:35.106501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.106540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620  [2024-12-09 16:59:35.106575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.106584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.106592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620  [2024-12-09 16:59:35.106600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.106609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.106615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620  [2024-12-09 16:59:35.106627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.106633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.106641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:12.620     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:12.620      16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:12.620      16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:12.620      16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:12.620       16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:12.620       16:59:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.620       16:59:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:12.620  [2024-12-09 16:59:35.605189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:12.620  [2024-12-09 16:59:35.606602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.606716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.606734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620  [2024-12-09 16:59:35.606754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.606763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.606770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620  [2024-12-09 16:59:35.606780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.606786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.606794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620  [2024-12-09 16:59:35.606802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.620  [2024-12-09 16:59:35.606810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:12.620  [2024-12-09 16:59:35.606816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:12.620       16:59:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.620     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:12.620     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:12.879     16:59:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:25.073      16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:25.073      16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:25.073      16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:25.073       16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:25.073       16:59:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.073       16:59:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:25.073       16:59:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:25.073      16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:25.073      16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:25.073       16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:25.073      16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:25.073       16:59:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.073       16:59:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:25.073       16:59:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:25.073     16:59:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:25.073  [2024-12-09 16:59:48.005362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:25.073  [2024-12-09 16:59:48.006790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.073  [2024-12-09 16:59:48.006832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.073  [2024-12-09 16:59:48.006854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.073  [2024-12-09 16:59:48.006876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.073  [2024-12-09 16:59:48.006885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.073  [2024-12-09 16:59:48.006895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.073  [2024-12-09 16:59:48.006903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.073  [2024-12-09 16:59:48.006911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.073  [2024-12-09 16:59:48.006918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.073  [2024-12-09 16:59:48.006928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.073  [2024-12-09 16:59:48.006934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.073  [2024-12-09 16:59:48.006944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.640     16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:25.640     16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:25.640      16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:25.640      16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:25.640      16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:25.640       16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:25.640       16:59:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.640       16:59:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:25.640       16:59:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.640     16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:10:25.640     16:59:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:25.640  [2024-12-09 16:59:48.605369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:25.640  [2024-12-09 16:59:48.606731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.640  [2024-12-09 16:59:48.606767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.640  [2024-12-09 16:59:48.606784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.640  [2024-12-09 16:59:48.606802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.640  [2024-12-09 16:59:48.606812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.640  [2024-12-09 16:59:48.606819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.640  [2024-12-09 16:59:48.606829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.640  [2024-12-09 16:59:48.606836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.640  [2024-12-09 16:59:48.606857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.640  [2024-12-09 16:59:48.606865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:25.640  [2024-12-09 16:59:48.606874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:25.640  [2024-12-09 16:59:48.606880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:26.206      16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:26.206       16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:26.206       16:59:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:26.206       16:59:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:26.206      16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:26.206      16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:26.206       16:59:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:26.206     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:26.465     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:26.465     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:26.465     16:59:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:38.726      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:38.726      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:38.726       17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:38.726      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:38.726       17:00:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.726       17:00:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:38.726       17:00:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:38.726      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:38.726      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:38.726       17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:38.726       17:00:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.726       17:00:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:38.726      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:38.726       17:00:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:38.726     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:38.726  [2024-12-09 17:00:01.405554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:38.726  [2024-12-09 17:00:01.408095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.726  [2024-12-09 17:00:01.408215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.726  [2024-12-09 17:00:01.408281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.726  [2024-12-09 17:00:01.408361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.726  [2024-12-09 17:00:01.408381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.726  [2024-12-09 17:00:01.408410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.726  [2024-12-09 17:00:01.408468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.726  [2024-12-09 17:00:01.408614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.726  [2024-12-09 17:00:01.408700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.726  [2024-12-09 17:00:01.408730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.726  [2024-12-09 17:00:01.408746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.726  [2024-12-09 17:00:01.408772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.984     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:38.984     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:38.984      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:38.984      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:38.984       17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:38.984      17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:38.984       17:00:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.984       17:00:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:38.984  [2024-12-09 17:00:01.905549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:38.984  [2024-12-09 17:00:01.906971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.984  [2024-12-09 17:00:01.907080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.984  [2024-12-09 17:00:01.907175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.984  [2024-12-09 17:00:01.907344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.984  [2024-12-09 17:00:01.907367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.984  [2024-12-09 17:00:01.907392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.984  [2024-12-09 17:00:01.907419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.984  [2024-12-09 17:00:01.907435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.984  [2024-12-09 17:00:01.907496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.984  [2024-12-09 17:00:01.907530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.984  [2024-12-09 17:00:01.907549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:38.984  [2024-12-09 17:00:01.907573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.984       17:00:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.984     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:10:38.984     17:00:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:39.550     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:10:39.551     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:39.551      17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:39.551      17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:39.551      17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:39.551       17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:39.551       17:00:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.551       17:00:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:39.551       17:00:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.551     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:39.551     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:39.551     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:39.551     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:39.551     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:39.809     17:00:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:52.002      17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:52.002      17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:52.002      17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:52.002       17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:52.002       17:00:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.002       17:00:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:52.002       17:00:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:10:52.002   17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73
00:10:52.002   17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2
00:10:52.002  remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s))
00:10:52.002   17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:10:52.002   17:00:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.002   17:00:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:52.002   17:00:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.002   17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:10:52.002   17:00:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.002   17:00:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:52.002   17:00:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.002   17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:10:52.002   17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:10:52.002    17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:10:52.002    17:00:14 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:10:52.002     17:00:14 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:10:52.002     17:00:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:58.558      17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:58.558      17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:58.558       17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:58.558      17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:58.558       17:00:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.558       17:00:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:58.558       17:00:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:58.558     17:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:58.558  [2024-12-09 17:00:20.861767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:58.558  [2024-12-09 17:00:20.863360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:20.863472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:20.863533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558  [2024-12-09 17:00:20.863573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:20.863591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:20.863651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558  [2024-12-09 17:00:20.863678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:20.863695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:20.863786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558  [2024-12-09 17:00:20.863818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:20.863836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:20.863878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:58.558     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:58.558      17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:58.558      17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:58.558      17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:58.558       17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:58.558       17:00:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.558       17:00:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:58.558  [2024-12-09 17:00:21.361760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:58.558  [2024-12-09 17:00:21.363127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:21.363239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:21.363258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558  [2024-12-09 17:00:21.363279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:21.363288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:21.363296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558  [2024-12-09 17:00:21.363305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:21.363313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:21.363321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558  [2024-12-09 17:00:21.363329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:58.558  [2024-12-09 17:00:21.363337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:10:58.558  [2024-12-09 17:00:21.363344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.558       17:00:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.558     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:10:58.558     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:59.124     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:10:59.124     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:59.124      17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:59.125      17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:59.125      17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:59.125       17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:59.125       17:00:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.125       17:00:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:59.125       17:00:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.125     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:59.125     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:59.125     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:59.125     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:59.125     17:00:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:59.125     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:59.125     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:59.125     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:59.125     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:59.125     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:59.125     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:59.382     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:59.382     17:00:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:11.593      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:11.593      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:11.593      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:11.593       17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:11.593       17:00:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.593       17:00:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:11.593       17:00:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:11.593      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:11.593      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:11.593      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:11.593       17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:11.593       17:00:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.593       17:00:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:11.593  [2024-12-09 17:00:34.261972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:11.593  [2024-12-09 17:00:34.263044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.593  [2024-12-09 17:00:34.263085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.593  [2024-12-09 17:00:34.263096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.593  [2024-12-09 17:00:34.263117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.593  [2024-12-09 17:00:34.263124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.593  [2024-12-09 17:00:34.263136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.593  [2024-12-09 17:00:34.263144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.593  [2024-12-09 17:00:34.263153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.593  [2024-12-09 17:00:34.263160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.593  [2024-12-09 17:00:34.263169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.593  [2024-12-09 17:00:34.263176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.593  [2024-12-09 17:00:34.263184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.593       17:00:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:11.593     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:11.856  [2024-12-09 17:00:34.662001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:11.856  [2024-12-09 17:00:34.663188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.856  [2024-12-09 17:00:34.663219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.856  [2024-12-09 17:00:34.663233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.856  [2024-12-09 17:00:34.663251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.856  [2024-12-09 17:00:34.663263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.856  [2024-12-09 17:00:34.663270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.856  [2024-12-09 17:00:34.663279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.856  [2024-12-09 17:00:34.663287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.856  [2024-12-09 17:00:34.663295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.856  [2024-12-09 17:00:34.663302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:11.856  [2024-12-09 17:00:34.663310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:11.856  [2024-12-09 17:00:34.663316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:11.856     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:11.856     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:11.856      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:11.856      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:11.856      17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:11.856       17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:11.856       17:00:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.856       17:00:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:11.856       17:00:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.856     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:11.856     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:12.117     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:12.117     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:12.117     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:12.117     17:00:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:12.117     17:00:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:24.358     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:24.358     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:24.358      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:24.358      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:24.358      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:24.358       17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:24.358       17:00:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.358       17:00:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:24.358       17:00:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.358     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:24.358     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:24.358     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:24.358     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:24.358  [2024-12-09 17:00:47.162179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:24.359  [2024-12-09 17:00:47.163521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.359  [2024-12-09 17:00:47.163560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.359  [2024-12-09 17:00:47.163572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.359  [2024-12-09 17:00:47.163593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.359  [2024-12-09 17:00:47.163601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.359  [2024-12-09 17:00:47.163610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.359  [2024-12-09 17:00:47.163618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.359  [2024-12-09 17:00:47.163629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.359  [2024-12-09 17:00:47.163636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.359  [2024-12-09 17:00:47.163645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.359  [2024-12-09 17:00:47.163651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.359  [2024-12-09 17:00:47.163660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.359     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:24.359     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:24.359     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:24.359     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:24.359      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:24.359      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:24.359      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:24.359       17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:24.359       17:00:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.359       17:00:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:24.359       17:00:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.359     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:24.359     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:24.620  [2024-12-09 17:00:47.562186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:24.620  [2024-12-09 17:00:47.563377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.620  [2024-12-09 17:00:47.563412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.620  [2024-12-09 17:00:47.563425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.620  [2024-12-09 17:00:47.563444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.620  [2024-12-09 17:00:47.563453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.620  [2024-12-09 17:00:47.563462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.620  [2024-12-09 17:00:47.563472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.620  [2024-12-09 17:00:47.563479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.620  [2024-12-09 17:00:47.563488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.620  [2024-12-09 17:00:47.563495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:24.620  [2024-12-09 17:00:47.563509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:11:24.620  [2024-12-09 17:00:47.563516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:24.882      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:24.882      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:24.882      17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:24.882       17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:24.882       17:00:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.882       17:00:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:24.882       17:00:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:24.882     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:25.143     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:25.143     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:25.143     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:25.143     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:25.143     17:00:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:25.143     17:00:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:25.143     17:00:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:25.143     17:00:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:37.380     17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:37.380     17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:37.380      17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:37.380      17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:37.380      17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:37.380       17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:37.380       17:01:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.380       17:01:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:37.380       17:01:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.380     17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:37.380     17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:37.380    17:01:00 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.33
00:11:37.380    17:01:00 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.33
00:11:37.380    17:01:00 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:11:37.380   17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.33
00:11:37.380   17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.33 2
00:11:37.380  remove_attach_helper took 45.33s to complete (handling 2 nvme drive(s))
00:11:37.380   17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:11:37.380   17:01:00 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68517
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68517 ']'
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68517
00:11:37.380    17:01:00 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:37.380    17:01:00 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68517
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:37.380  killing process with pid 68517
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68517'
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68517
00:11:37.380   17:01:00 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68517
00:11:38.762   17:01:01 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:38.762  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:39.333  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:11:39.333  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:11:39.333  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:39.593  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:39.593  
00:11:39.593  real	2m30.496s
00:11:39.593  user	1m53.602s
00:11:39.593  sys	0m15.597s
00:11:39.593  ************************************
00:11:39.593  END TEST sw_hotplug
00:11:39.593  ************************************
00:11:39.593   17:01:02 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.593   17:01:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:39.593   17:01:02  -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]]
00:11:39.593   17:01:02  -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:11:39.593   17:01:02  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:39.593   17:01:02  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.593   17:01:02  -- common/autotest_common.sh@10 -- # set +x
00:11:39.593  ************************************
00:11:39.593  START TEST nvme_xnvme
00:11:39.593  ************************************
00:11:39.593   17:01:02 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:11:39.593  * Looking for test storage...
00:11:39.593  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:39.593     17:01:02 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:39.593      17:01:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:11:39.593      17:01:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:39.857     17:01:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:39.857      17:01:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:39.857     17:01:02 nvme_xnvme -- scripts/common.sh@368 -- # return 0
00:11:39.857     17:01:02 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:39.857     17:01:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:39.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.857  		--rc genhtml_branch_coverage=1
00:11:39.857  		--rc genhtml_function_coverage=1
00:11:39.857  		--rc genhtml_legend=1
00:11:39.857  		--rc geninfo_all_blocks=1
00:11:39.857  		--rc geninfo_unexecuted_blocks=1
00:11:39.857  		
00:11:39.857  		'
00:11:39.857     17:01:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:39.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.857  		--rc genhtml_branch_coverage=1
00:11:39.857  		--rc genhtml_function_coverage=1
00:11:39.857  		--rc genhtml_legend=1
00:11:39.857  		--rc geninfo_all_blocks=1
00:11:39.857  		--rc geninfo_unexecuted_blocks=1
00:11:39.857  		
00:11:39.857  		'
00:11:39.857     17:01:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:39.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.857  		--rc genhtml_branch_coverage=1
00:11:39.857  		--rc genhtml_function_coverage=1
00:11:39.857  		--rc genhtml_legend=1
00:11:39.857  		--rc geninfo_all_blocks=1
00:11:39.857  		--rc geninfo_unexecuted_blocks=1
00:11:39.857  		
00:11:39.857  		'
00:11:39.857     17:01:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:39.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.857  		--rc genhtml_branch_coverage=1
00:11:39.857  		--rc genhtml_function_coverage=1
00:11:39.857  		--rc genhtml_legend=1
00:11:39.857  		--rc geninfo_all_blocks=1
00:11:39.857  		--rc geninfo_unexecuted_blocks=1
00:11:39.857  		
00:11:39.857  		'
00:11:39.857    17:01:02 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh
00:11:39.857     17:01:02 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:11:39.857      17:01:02 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:11:39.857       17:01:02 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:11:39.858       17:01:02 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n
00:11:39.858      17:01:02 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:11:39.858         17:01:02 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:11:39.858        17:01:02 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:39.858  #define SPDK_CONFIG_H
00:11:39.858  #define SPDK_CONFIG_AIO_FSDEV 1
00:11:39.858  #define SPDK_CONFIG_APPS 1
00:11:39.858  #define SPDK_CONFIG_ARCH native
00:11:39.858  #define SPDK_CONFIG_ASAN 1
00:11:39.858  #undef SPDK_CONFIG_AVAHI
00:11:39.858  #undef SPDK_CONFIG_CET
00:11:39.858  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:39.858  #define SPDK_CONFIG_COVERAGE 1
00:11:39.858  #define SPDK_CONFIG_CROSS_PREFIX 
00:11:39.858  #undef SPDK_CONFIG_CRYPTO
00:11:39.858  #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:39.858  #undef SPDK_CONFIG_CUSTOMOCF
00:11:39.858  #undef SPDK_CONFIG_DAOS
00:11:39.858  #define SPDK_CONFIG_DAOS_DIR 
00:11:39.858  #define SPDK_CONFIG_DEBUG 1
00:11:39.858  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:39.858  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:11:39.858  #define SPDK_CONFIG_DPDK_INC_DIR 
00:11:39.858  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:11:39.858  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:39.858  #undef SPDK_CONFIG_DPDK_UADK
00:11:39.858  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:11:39.858  #define SPDK_CONFIG_EXAMPLES 1
00:11:39.858  #undef SPDK_CONFIG_FC
00:11:39.858  #define SPDK_CONFIG_FC_PATH 
00:11:39.858  #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:39.858  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:39.858  #define SPDK_CONFIG_FSDEV 1
00:11:39.858  #undef SPDK_CONFIG_FUSE
00:11:39.858  #undef SPDK_CONFIG_FUZZER
00:11:39.858  #define SPDK_CONFIG_FUZZER_LIB 
00:11:39.858  #undef SPDK_CONFIG_GOLANG
00:11:39.858  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:39.858  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:39.858  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:39.858  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:39.858  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:39.858  #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:39.858  #undef SPDK_CONFIG_HAVE_LZ4
00:11:39.858  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:39.858  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:39.858  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:39.858  #define SPDK_CONFIG_IDXD 1
00:11:39.858  #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:39.858  #undef SPDK_CONFIG_IPSEC_MB
00:11:39.858  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:11:39.858  #define SPDK_CONFIG_ISAL 1
00:11:39.858  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:39.858  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:39.858  #define SPDK_CONFIG_LIBDIR 
00:11:39.858  #undef SPDK_CONFIG_LTO
00:11:39.858  #define SPDK_CONFIG_MAX_LCORES 128
00:11:39.858  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:39.858  #define SPDK_CONFIG_NVME_CUSE 1
00:11:39.858  #undef SPDK_CONFIG_OCF
00:11:39.858  #define SPDK_CONFIG_OCF_PATH 
00:11:39.858  #define SPDK_CONFIG_OPENSSL_PATH 
00:11:39.858  #undef SPDK_CONFIG_PGO_CAPTURE
00:11:39.858  #define SPDK_CONFIG_PGO_DIR 
00:11:39.858  #undef SPDK_CONFIG_PGO_USE
00:11:39.858  #define SPDK_CONFIG_PREFIX /usr/local
00:11:39.858  #undef SPDK_CONFIG_RAID5F
00:11:39.858  #undef SPDK_CONFIG_RBD
00:11:39.858  #define SPDK_CONFIG_RDMA 1
00:11:39.858  #define SPDK_CONFIG_RDMA_PROV verbs
00:11:39.858  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:39.858  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:39.858  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:39.858  #define SPDK_CONFIG_SHARED 1
00:11:39.858  #undef SPDK_CONFIG_SMA
00:11:39.858  #define SPDK_CONFIG_TESTS 1
00:11:39.858  #undef SPDK_CONFIG_TSAN
00:11:39.858  #define SPDK_CONFIG_UBLK 1
00:11:39.858  #define SPDK_CONFIG_UBSAN 1
00:11:39.858  #undef SPDK_CONFIG_UNIT_TESTS
00:11:39.858  #undef SPDK_CONFIG_URING
00:11:39.858  #define SPDK_CONFIG_URING_PATH 
00:11:39.858  #undef SPDK_CONFIG_URING_ZNS
00:11:39.858  #undef SPDK_CONFIG_USDT
00:11:39.858  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:11:39.858  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:11:39.858  #undef SPDK_CONFIG_VFIO_USER
00:11:39.858  #define SPDK_CONFIG_VFIO_USER_DIR 
00:11:39.858  #define SPDK_CONFIG_VHOST 1
00:11:39.858  #define SPDK_CONFIG_VIRTIO 1
00:11:39.858  #undef SPDK_CONFIG_VTUNE
00:11:39.858  #define SPDK_CONFIG_VTUNE_DIR 
00:11:39.858  #define SPDK_CONFIG_WERROR 1
00:11:39.858  #define SPDK_CONFIG_WPDK_DIR 
00:11:39.858  #define SPDK_CONFIG_XNVME 1
00:11:39.858  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:11:39.858       17:01:02 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:11:39.858      17:01:02 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:39.858       17:01:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:11:39.858       17:01:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:39.858       17:01:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:39.858       17:01:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:39.859        17:01:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.859        17:01:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.859        17:01:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.859        17:01:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:11:39.859        17:01:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:11:39.859         17:01:02 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:11:39.859        17:01:02 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:11:39.859        17:01:02 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:11:39.859        17:01:02 nvme_xnvme -- pm/common@68 -- # uname -s
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@76 -- # SUDO[0]=
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E'
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]]
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:11:39.859       17:01:02 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@70 -- # :
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@126 -- # :
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@140 -- # :
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@142 -- # : true
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0
00:11:39.859      17:01:02 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@154 -- # :
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@169 -- # :
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@206 -- # cat
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
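The few lines above rebuild a LeakSanitizer suppression file and point LSAN at
it: the old file is removed, the cat at autotest_common.sh@206 presumably seeds
it (xtrace does not display redirections), and the echo appends a known libfuse
leak entry. A minimal sketch of the same steps, assuming the libfuse entry is
the only suppression needed:

    #!/usr/bin/env bash
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    # Suppress a known leak report originating in libfuse3.so.
    echo "leak:libfuse3.so" >> "$supp"
    export LSAN_OPTIONS="suppressions=$supp"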
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV=
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt=
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind=
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind=
00:11:39.860       17:01:02 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE=
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69886 ]]
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69886
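"kill -0 69886" above sends no signal; signal 0 only checks that the PID exists
and may be signaled, which is how the harness confirms it is still running under
a live top-level autotest process before provisioning test storage. A short
sketch of the idiom:

    # Returns 0 if the process exists (and we may signal it), non-zero otherwise.
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    fi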
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:11:39.860       17:01:02 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.lyhXkT
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.lyhXkT/tests/xnvme /tmp/spdk.lyhXkT
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.860       17:01:02 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T
00:11:39.860       17:01:02 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974167552
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593554944
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.860      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974167552
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593554944
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96493584384
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3209195520
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:11:39.861  * Looking for test storage...
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:11:39.861       17:01:02 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:39.861       17:01:02 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974167552
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:39.861  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
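set_test_storage, traced above, walks "df -T" output into associative arrays
keyed by mount point, then resolves each candidate directory to its mount and
checks for enough free space (the requested 2 GiB plus a margin, hence
requested_size=2214592512) on a non-tmpfs/ramfs filesystem; /home on btrfs with
~13.9 GB available wins here. A reduced sketch of the parsing half, assuming
byte-sized df output (the traced values are plainly bytes) and mount points
without spaces:

    declare -A fss avails
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T --block-size=1 | grep -v Filesystem)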
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1703 -- # true
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x
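The block above re-arms tracing: errtrace/extdebug propagate the ERR trap into
functions so failures print a backtrace, PS4 stamps every traced line with the
time, test domain, source file and line (exactly the "17:01:02 nvme_xnvme --
file@NN -- " prefix seen throughout this log), and set -x turns tracing back on
once the dedicated trace fd (13) is confirmed open. A minimal sketch of the PS4
idiom (PS4 is expanded like PS1, so \t yields the current time):

    # BASH_SOURCE/LINENO locate the traced line; \t adds a HH:MM:SS timestamp.
    export PS4=' \t ${FUNCNAME[0]:-main} -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '
    set -x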
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:39.861       17:01:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:11:39.861       17:01:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:39.861      17:01:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:39.861       17:01:02 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:11:39.861       17:01:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:11:39.861       17:01:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:39.861       17:01:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:11:39.861      17:01:02 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:11:39.861       17:01:02 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:11:39.862       17:01:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:11:39.862       17:01:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:39.862       17:01:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@368 -- # return 0
00:11:39.862      17:01:02 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:39.862      17:01:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:39.862  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.862  		--rc genhtml_branch_coverage=1
00:11:39.862  		--rc genhtml_function_coverage=1
00:11:39.862  		--rc genhtml_legend=1
00:11:39.862  		--rc geninfo_all_blocks=1
00:11:39.862  		--rc geninfo_unexecuted_blocks=1
00:11:39.862  		
00:11:39.862  		'
00:11:39.862      17:01:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:39.862  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.862  		--rc genhtml_branch_coverage=1
00:11:39.862  		--rc genhtml_function_coverage=1
00:11:39.862  		--rc genhtml_legend=1
00:11:39.862  		--rc geninfo_all_blocks=1
00:11:39.862  		--rc geninfo_unexecuted_blocks=1
00:11:39.862  		
00:11:39.862  		'
00:11:39.862      17:01:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:39.862  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.862  		--rc genhtml_branch_coverage=1
00:11:39.862  		--rc genhtml_function_coverage=1
00:11:39.862  		--rc genhtml_legend=1
00:11:39.862  		--rc geninfo_all_blocks=1
00:11:39.862  		--rc geninfo_unexecuted_blocks=1
00:11:39.862  		
00:11:39.862  		'
00:11:39.862      17:01:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:39.862  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.862  		--rc genhtml_branch_coverage=1
00:11:39.862  		--rc genhtml_function_coverage=1
00:11:39.862  		--rc genhtml_legend=1
00:11:39.862  		--rc geninfo_all_blocks=1
00:11:39.862  		--rc geninfo_unexecuted_blocks=1
00:11:39.862  		
00:11:39.862  		'
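The cmp_versions trace above splits both version strings on ".", "-" and ":"
and compares them field by field as integers; lcov 1.15 < 2, so the legacy
"--rc lcov_branch_coverage=1" option style is selected and exported via
LCOV_OPTS/LCOV. A condensed sketch of the comparison, assuming purely numeric
fields:

    ver_lt() {                      # returns 0 when $1 < $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                    # equal
    }
    ver_lt 1.15 2 && echo "use legacy lcov options"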
00:11:39.862     17:01:02 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:39.862      17:01:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:39.862       17:01:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.862       17:01:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.862       17:01:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.862       17:01:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:11:39.862       17:01:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
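xnvme/common.sh, sourced above, keeps the whole test matrix in bash arrays: the
io mechanisms, per-mechanism workload lists, a device path per mechanism, and
an associative array whose keys evidently become the bdev_xnvme_create RPC
parameters (the JSON configs later in this log carry the same fields). A
minimal sketch of that last pattern:

    # One associative array per bdev to create; keys mirror the RPC params.
    declare -A method_bdev_xnvme_create_0=(
        [name]=xnvme_bdev
        [filename]=/dev/nvme0n1
        [io_mechanism]=libaio
        [conserve_cpu]=false
    )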
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:11:39.862    17:01:02 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:40.436  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:40.436  Waiting for block devices as requested
00:11:40.436  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:40.699  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:40.699  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:40.699  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:46.000  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:11:46.000    17:01:08 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:11:46.261     17:01:09 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:11:46.261    17:01:09 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:11:46.522    17:01:09 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:11:46.522    17:01:09 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:11:46.522  No valid GPT data, bailing
00:11:46.522     17:01:09 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:11:46.522    17:01:09 nvme_xnvme -- scripts/common.sh@394 -- # pt=
00:11:46.522    17:01:09 nvme_xnvme -- scripts/common.sh@395 -- # return 1
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1
00:11:46.522    17:01:09 nvme_xnvme -- xnvme/common.sh@83 -- # return 0
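prep_nvme, finishing above, rebinds the controllers to the kernel nvme driver
(with poll_queues=10 for io_uring passthrough) and uses block_in_use to decide
whether a namespace is safe to test: the spdk-gpt.py probe finds no valid GPT
("No valid GPT data, bailing") and blkid reports no partition-table type, so
/dev/nvme0n1 is treated as free and recorded as the libaio/io_uring target. A
sketch of the blkid half:

    # Empty output means no recognized partition table on the device.
    pt=$(blkid -s PTTYPE -o value /dev/nvme0n1 || true)
    if [[ -z $pt ]]; then
        echo "/dev/nvme0n1 carries no partition table; safe to claim for tests"
    fi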
00:11:46.522   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT
00:11:46.522   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:11:46.522   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:11:46.523   17:01:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:11:46.523   17:01:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:46.523   17:01:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:46.523   17:01:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:11:46.523  ************************************
00:11:46.523  START TEST xnvme_rpc
00:11:46.523  ************************************
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70276
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70276
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70276 ']'
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:46.523  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:46.523   17:01:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:11:46.784  [2024-12-09 17:01:09.629360] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:11:46.784  [2024-12-09 17:01:09.629511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70276 ]
00:11:46.784  [2024-12-09 17:01:09.792122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:47.044  [2024-12-09 17:01:09.941216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.987  xnvme_bdev
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
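Each rpc_xnvme call above round-trips the just-created bdev through the RPC
server: framework_get_config returns the bdev subsystem configuration as JSON,
and jq plucks one field of the bdev_xnvme_create entry so the test can assert
it matches what was passed in (name, filename, io_mechanism, conserve_cpu). The
filter in use is, verbatim from the trace:

    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'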
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70276
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70276 ']'
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70276
00:11:47.987    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:11:47.987   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:47.988    17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70276
00:11:47.988   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:47.988   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:47.988  killing process with pid 70276
00:11:47.988   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70276'
00:11:47.988   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70276
00:11:47.988   17:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70276
00:11:49.899  
00:11:49.899  real	0m3.258s
00:11:49.899  user	0m3.128s
00:11:49.899  sys	0m0.590s
00:11:49.900   17:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.900  ************************************
00:11:49.900  END TEST xnvme_rpc
00:11:49.900  ************************************
00:11:49.900   17:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:49.900   17:01:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:11:49.900   17:01:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:49.900   17:01:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:49.900   17:01:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:11:49.900  ************************************
00:11:49.900  START TEST xnvme_bdevperf
00:11:49.900  ************************************
00:11:49.900   17:01:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:11:49.900   17:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:11:49.900   17:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:11:49.900   17:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:11:49.900   17:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:11:49.900    17:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:11:49.900    17:01:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:11:49.900    17:01:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:11:49.900  {
00:11:49.900    "subsystems": [
00:11:49.900      {
00:11:49.900        "subsystem": "bdev",
00:11:49.900        "config": [
00:11:49.900          {
00:11:49.900            "params": {
00:11:49.900              "io_mechanism": "libaio",
00:11:49.900              "conserve_cpu": false,
00:11:49.900              "filename": "/dev/nvme0n1",
00:11:49.900              "name": "xnvme_bdev"
00:11:49.900            },
00:11:49.900            "method": "bdev_xnvme_create"
00:11:49.900          },
00:11:49.900          {
00:11:49.900            "method": "bdev_wait_for_examine"
00:11:49.900          }
00:11:49.900        ]
00:11:49.900      }
00:11:49.900    ]
00:11:49.900  }
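The JSON block above is the bdevperf configuration emitted by gen_conf and
handed over anonymously: "--json /dev/fd/62" makes bdevperf read it from a file
descriptor supplied by process substitution, so no temporary config file is
written. A sketch of the plumbing, with a hypothetical config-emitting helper:

    # emit_conf prints a JSON config like the one above (hypothetical helper).
    ./bdevperf --json <(emit_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096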
00:11:50.161  [2024-12-09 17:01:12.959466] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:11:50.161  [2024-12-09 17:01:12.959614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70350 ]
00:11:50.161  [2024-12-09 17:01:13.123653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:50.423  [2024-12-09 17:01:13.270707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:50.685  Running I/O for 5 seconds...
00:11:53.023      27025.00 IOPS,   105.57 MiB/s
[2024-12-09T17:01:16.636Z]     27270.00 IOPS,   106.52 MiB/s
[2024-12-09T17:01:17.685Z]     27123.67 IOPS,   105.95 MiB/s
[2024-12-09T17:01:18.642Z]     26814.00 IOPS,   104.74 MiB/s
00:11:55.601                                                                                                  Latency(us)
00:11:55.601  
[2024-12-09T17:01:18.642Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:55.601  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:11:55.601  	 xnvme_bdev          :       5.00   27122.72     105.95       0.00     0.00    2354.89     450.56    9074.22
00:11:55.601  
[2024-12-09T17:01:18.642Z]  ===================================================================================================================
00:11:55.601  
[2024-12-09T17:01:18.642Z]  Total                       :              27122.72     105.95       0.00     0.00    2354.89     450.56    9074.22
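As a sanity check, the bandwidth column follows directly from the IOPS column
at the 4096-byte IO size used here: 27122.72 IOPS x 4096 B = 111,094,661 B/s,
and divided by 2^20 that is 105.95 MiB/s, matching the table.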
00:11:56.545   17:01:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:11:56.545   17:01:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:11:56.545    17:01:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:11:56.545    17:01:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:11:56.545    17:01:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:11:56.545  {
00:11:56.545    "subsystems": [
00:11:56.545      {
00:11:56.545        "subsystem": "bdev",
00:11:56.545        "config": [
00:11:56.545          {
00:11:56.545            "params": {
00:11:56.545              "io_mechanism": "libaio",
00:11:56.545              "conserve_cpu": false,
00:11:56.545              "filename": "/dev/nvme0n1",
00:11:56.545              "name": "xnvme_bdev"
00:11:56.545            },
00:11:56.545            "method": "bdev_xnvme_create"
00:11:56.545          },
00:11:56.545          {
00:11:56.545            "method": "bdev_wait_for_examine"
00:11:56.545          }
00:11:56.545        ]
00:11:56.545      }
00:11:56.545    ]
00:11:56.545  }
00:11:56.807  [2024-12-09 17:01:19.599892] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:11:56.807  [2024-12-09 17:01:19.600064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70432 ]
00:11:56.807  [2024-12-09 17:01:19.768495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:57.069  [2024-12-09 17:01:19.913812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:57.330  Running I/O for 5 seconds...
00:11:59.659      34529.00 IOPS,   134.88 MiB/s
[2024-12-09T17:01:23.640Z]     33446.50 IOPS,   130.65 MiB/s
[2024-12-09T17:01:24.583Z]     33772.67 IOPS,   131.92 MiB/s
[2024-12-09T17:01:25.529Z]     33639.50 IOPS,   131.40 MiB/s
00:12:02.488                                                                                                  Latency(us)
00:12:02.488  
[2024-12-09T17:01:25.529Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:02.488  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:12:02.488  	 xnvme_bdev          :       5.00   32718.20     127.81       0.00     0.00    1951.63     365.49    9376.69
00:12:02.488  
[2024-12-09T17:01:25.529Z]  ===================================================================================================================
00:12:02.488  
[2024-12-09T17:01:25.529Z]  Total                       :              32718.20     127.81       0.00     0.00    1951.63     365.49    9376.69
00:12:03.430  ************************************
00:12:03.430  END TEST xnvme_bdevperf
00:12:03.430  ************************************
00:12:03.430  
00:12:03.430  real	0m13.289s
00:12:03.430  user	0m5.097s
00:12:03.430  sys	0m6.371s
00:12:03.430   17:01:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:03.430   17:01:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:03.430   17:01:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:12:03.430   17:01:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:03.430   17:01:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:03.430   17:01:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:03.430  ************************************
00:12:03.430  START TEST xnvme_fio_plugin
00:12:03.430  ************************************
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:03.430    17:01:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:03.430    17:01:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:12:03.430    17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:03.430    17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:03.430    17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:12:03.430    17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:03.430   17:01:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:03.430  {
00:12:03.430    "subsystems": [
00:12:03.430      {
00:12:03.430        "subsystem": "bdev",
00:12:03.430        "config": [
00:12:03.430          {
00:12:03.430            "params": {
00:12:03.430              "io_mechanism": "libaio",
00:12:03.430              "conserve_cpu": false,
00:12:03.430              "filename": "/dev/nvme0n1",
00:12:03.430              "name": "xnvme_bdev"
00:12:03.430            },
00:12:03.430            "method": "bdev_xnvme_create"
00:12:03.430          },
00:12:03.430          {
00:12:03.430            "method": "bdev_wait_for_examine"
00:12:03.430          }
00:12:03.430        ]
00:12:03.430      }
00:12:03.430    ]
00:12:03.430  }
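Before launching fio above, the harness scans the fio plugin's dynamic
dependencies for an ASan runtime and, on finding /usr/lib64/libasan.so.8,
preloads it ahead of the plugin itself; the ASan runtime generally must be the
first DSO initialized or the instrumented plugin refuses to load. A condensed
sketch of the detection, mirroring the ldd/grep/awk pipeline in the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n $asan_lib ]]; then
        # Forward the fio arguments shown in the traced command above.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    fi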
00:12:03.430  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:12:03.430  fio-3.35
00:12:03.430  Starting 1 thread
00:12:10.023  
00:12:10.023  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70554: Mon Dec  9 17:01:32 2024
00:12:10.023    read: IOPS=34.8k, BW=136MiB/s (143MB/s)(680MiB/5001msec)
00:12:10.023      slat (usec): min=4, max=2041, avg=18.36, stdev=88.16
00:12:10.023      clat (usec): min=106, max=4977, avg=1330.21, stdev=500.84
00:12:10.023       lat (usec): min=213, max=4982, avg=1348.57, stdev=492.36
00:12:10.023      clat percentiles (usec):
00:12:10.023       |  1.00th=[  306],  5.00th=[  562], 10.00th=[  725], 20.00th=[  914],
00:12:10.023       | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1434],
00:12:10.023       | 70.00th=[ 1565], 80.00th=[ 1713], 90.00th=[ 1942], 95.00th=[ 2147],
00:12:10.023       | 99.00th=[ 2802], 99.50th=[ 3130], 99.90th=[ 3752], 99.95th=[ 3884],
00:12:10.023       | 99.99th=[ 4424]
00:12:10.023     bw (  KiB/s): min=132344, max=144032, per=99.77%, avg=139007.11, stdev=4550.69, samples=9
00:12:10.023     iops        : min=33086, max=36008, avg=34752.44, stdev=1136.83, samples=9
00:12:10.023    lat (usec)   : 250=0.49%, 500=3.32%, 750=7.19%, 1000=14.73%
00:12:10.023    lat (msec)   : 2=66.07%, 4=8.17%, 10=0.03%
00:12:10.023    cpu          : usr=48.26%, sys=43.38%, ctx=11, majf=0, minf=764
00:12:10.023    IO depths    : 1=0.6%, 2=1.5%, 4=3.4%, 8=8.3%, 16=22.3%, 32=61.8%, >=64=2.2%
00:12:10.023       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:10.023       complete  : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0%
00:12:10.023       issued rwts: total=174187,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:10.023       latency   : target=0, window=0, percentile=100.00%, depth=64
00:12:10.023  
00:12:10.023  Run status group 0 (all jobs):
00:12:10.023     READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=680MiB (713MB), run=5001-5001msec
00:12:10.283  -----------------------------------------------------
00:12:10.283  Suppressions used:
00:12:10.283    count      bytes template
00:12:10.283        1         11 /usr/src/fio/parse.c
00:12:10.283        1          8 libtcmalloc_minimal.so
00:12:10.283        1        904 libcrypto.so
00:12:10.283  -----------------------------------------------------
00:12:10.283  
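[Annotation] The autotest_common.sh@1343-1356 xtrace above shows how the harness finds the sanitizer runtime and preloads it ahead of the fio ioengine plugin, so ASan's interceptors initialize before the plugin's constructors run. A minimal standalone sketch of that logic (paths copied from this run; bdev.json is an illustrative stand-in for the JSON config the harness streams over fd 62):

  # Locate the ASan runtime the fio plugin links against.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break
  done

  # Preload the sanitizer first, then the ioengine plugin itself, and run fio.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev 62<bdev.json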
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:10.544    17:01:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:10.544    17:01:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:12:10.544    17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:10.544    17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:10.544    17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:12:10.544    17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:10.544   17:01:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:10.544  {
00:12:10.544    "subsystems": [
00:12:10.544      {
00:12:10.544        "subsystem": "bdev",
00:12:10.544        "config": [
00:12:10.544          {
00:12:10.544            "params": {
00:12:10.544              "io_mechanism": "libaio",
00:12:10.544              "conserve_cpu": false,
00:12:10.544              "filename": "/dev/nvme0n1",
00:12:10.544              "name": "xnvme_bdev"
00:12:10.544            },
00:12:10.544            "method": "bdev_xnvme_create"
00:12:10.544          },
00:12:10.544          {
00:12:10.544            "method": "bdev_wait_for_examine"
00:12:10.544          }
00:12:10.544        ]
00:12:10.544      }
00:12:10.544    ]
00:12:10.544  }
00:12:10.544  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:12:10.544  fio-3.35
00:12:10.544  Starting 1 thread
00:12:17.133  
00:12:17.133  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70656: Mon Dec  9 17:01:39 2024
00:12:17.133    write: IOPS=36.2k, BW=141MiB/s (148MB/s)(707MiB/5001msec); 0 zone resets
00:12:17.133      slat (usec): min=4, max=1979, avg=19.19, stdev=81.34
00:12:17.133      clat (usec): min=106, max=7926, avg=1243.37, stdev=511.01
00:12:17.133       lat (usec): min=204, max=7931, avg=1262.56, stdev=504.21
00:12:17.133      clat percentiles (usec):
00:12:17.133       |  1.00th=[  289],  5.00th=[  494], 10.00th=[  644], 20.00th=[  824],
00:12:17.133       | 30.00th=[  955], 40.00th=[ 1074], 50.00th=[ 1205], 60.00th=[ 1319],
00:12:17.133       | 70.00th=[ 1467], 80.00th=[ 1631], 90.00th=[ 1876], 95.00th=[ 2089],
00:12:17.133       | 99.00th=[ 2737], 99.50th=[ 3097], 99.90th=[ 3818], 99.95th=[ 4113],
00:12:17.133       | 99.99th=[ 6456]
00:12:17.133     bw (  KiB/s): min=128904, max=157760, per=99.10%, avg=143464.00, stdev=8327.35, samples=9
00:12:17.133     iops        : min=32226, max=39440, avg=35866.00, stdev=2081.84, samples=9
00:12:17.133    lat (usec)   : 250=0.61%, 500=4.62%, 750=9.99%, 1000=18.17%
00:12:17.133    lat (msec)   : 2=60.08%, 4=6.48%, 10=0.06%
00:12:17.133    cpu          : usr=43.58%, sys=46.10%, ctx=15, majf=0, minf=765
00:12:17.133    IO depths    : 1=0.5%, 2=1.2%, 4=3.0%, 8=8.0%, 16=22.8%, 32=62.5%, >=64=2.2%
00:12:17.133       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:17.133       complete  : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0%
00:12:17.133       issued rwts: total=0,180992,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:17.133       latency   : target=0, window=0, percentile=100.00%, depth=64
00:12:17.133  
00:12:17.133  Run status group 0 (all jobs):
00:12:17.133    WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=707MiB (741MB), run=5001-5001msec
00:12:17.395  -----------------------------------------------------
00:12:17.395  Suppressions used:
00:12:17.395    count      bytes template
00:12:17.395        1         11 /usr/src/fio/parse.c
00:12:17.395        1          8 libtcmalloc_minimal.so
00:12:17.395        1        904 libcrypto.so
00:12:17.395  -----------------------------------------------------
00:12:17.395  
00:12:17.657  
00:12:17.657  real	0m14.206s
00:12:17.657  user	0m7.640s
00:12:17.657  sys	0m5.245s
00:12:17.657   17:01:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:17.657   17:01:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:12:17.657  ************************************
00:12:17.657  END TEST xnvme_fio_plugin
00:12:17.657  ************************************
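[Annotation] Every fio run in this test passes --spdk_json_conf=/dev/fd/62, pointing the SPDK ioengine at a config delivered over file descriptor 62 instead of a file on disk. A sketch of generating and feeding the same config printed above (gen_xnvme_conf is an illustrative name; the harness uses its own gen_conf helper, and fio must run under the LD_PRELOAD shown earlier):

  gen_xnvme_conf() {
      cat <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "libaio",
              "conserve_cpu": false,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  JSON
  }

  # Open fd 62 on the generated stream; /dev/fd/62 then resolves to it inside fio.
  fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev 62< <(gen_xnvme_conf)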
00:12:17.657   17:01:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:12:17.657   17:01:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:12:17.657   17:01:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:12:17.657   17:01:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:12:17.657   17:01:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:17.657   17:01:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:17.657   17:01:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:17.657  ************************************
00:12:17.657  START TEST xnvme_rpc
00:12:17.657  ************************************
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70741
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70741
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70741 ']'
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:17.657  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:17.657   17:01:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:17.657  [2024-12-09 17:01:40.611970] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:12:17.657  [2024-12-09 17:01:40.612145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70741 ]
00:12:17.918  [2024-12-09 17:01:40.776930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:17.918  [2024-12-09 17:01:40.927450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.862  xnvme_bdev
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70741
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70741 ']'
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70741
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:12:18.862   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:18.862    17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70741
00:12:19.123   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:19.123   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:19.123  killing process with pid 70741
00:12:19.123   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70741'
00:12:19.123   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70741
00:12:19.123   17:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70741
00:12:21.040  
00:12:21.040  real	0m3.245s
00:12:21.040  user	0m3.095s
00:12:21.040  sys	0m0.630s
00:12:21.040   17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:21.040  ************************************
00:12:21.040  END TEST xnvme_rpc
00:12:21.040  ************************************
00:12:21.040   17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
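[Annotation] The xnvme_rpc test above creates an xnvme bdev on a live target, reads its registered parameters back, and deletes it. A condensed sketch of the same lifecycle, assuming SPDK's scripts/rpc.py client (the trace's rpc_cmd/rpc_xnvme helpers wrap it; the harness also waits for /var/tmp/spdk.sock before issuing RPCs and uses killprocess for teardown):

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"

  "$spdk/build/bin/spdk_tgt" &

  "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => conserve_cpu=true

  # framework_get_config returns the recorded create calls; jq pulls one param out
  "$rpc" framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # libaio

  "$rpc" bdev_xnvme_delete xnvme_bdev
  kill %1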
00:12:21.040   17:01:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:12:21.040   17:01:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:21.040   17:01:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:21.040   17:01:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:21.040  ************************************
00:12:21.040  START TEST xnvme_bdevperf
00:12:21.040  ************************************
00:12:21.040   17:01:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:12:21.040   17:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:12:21.040   17:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:12:21.040   17:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:21.040   17:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:12:21.040    17:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:12:21.040    17:01:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:12:21.040    17:01:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:21.040  {
00:12:21.040    "subsystems": [
00:12:21.040      {
00:12:21.040        "subsystem": "bdev",
00:12:21.040        "config": [
00:12:21.040          {
00:12:21.040            "params": {
00:12:21.040              "io_mechanism": "libaio",
00:12:21.040              "conserve_cpu": true,
00:12:21.040              "filename": "/dev/nvme0n1",
00:12:21.040              "name": "xnvme_bdev"
00:12:21.040            },
00:12:21.040            "method": "bdev_xnvme_create"
00:12:21.040          },
00:12:21.040          {
00:12:21.040            "method": "bdev_wait_for_examine"
00:12:21.040          }
00:12:21.041        ]
00:12:21.041      }
00:12:21.041    ]
00:12:21.041  }
00:12:21.041  [2024-12-09 17:01:43.914230] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:12:21.041  [2024-12-09 17:01:43.914410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70815 ]
00:12:21.301  [2024-12-09 17:01:44.082744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:21.301  [2024-12-09 17:01:44.232004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:21.562  Running I/O for 5 seconds...
00:12:23.886      33397.00 IOPS,   130.46 MiB/s
[2024-12-09T17:01:47.868Z]     32527.00 IOPS,   127.06 MiB/s
[2024-12-09T17:01:48.810Z]     31838.67 IOPS,   124.37 MiB/s
[2024-12-09T17:01:49.753Z]     32089.50 IOPS,   125.35 MiB/s
00:12:26.712                                                                                                  Latency(us)
00:12:26.712  
[2024-12-09T17:01:49.753Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:26.712  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:12:26.712  	 xnvme_bdev          :       5.00   32125.70     125.49       0.00     0.00    1987.61     356.04    8519.68
00:12:26.712  
[2024-12-09T17:01:49.753Z]  ===================================================================================================================
00:12:26.712  
[2024-12-09T17:01:49.753Z]  Total                       :              32125.70     125.49       0.00     0.00    1987.61     356.04    8519.68
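[Annotation] The bdevperf run summarized above drives the named bdev directly, taking the same JSON config over fd 62 but bypassing fio entirely. A sketch of the invocation with the flags from this trace (bdev.json again stands in for the generated config):

  # -q queue depth, -w workload, -t runtime (s), -T target bdev, -o IO size (bytes)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 62<bdev.json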
00:12:27.655   17:01:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:27.655   17:01:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:12:27.655    17:01:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:12:27.655    17:01:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:12:27.655    17:01:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:27.655  {
00:12:27.655    "subsystems": [
00:12:27.655      {
00:12:27.655        "subsystem": "bdev",
00:12:27.655        "config": [
00:12:27.655          {
00:12:27.655            "params": {
00:12:27.655              "io_mechanism": "libaio",
00:12:27.655              "conserve_cpu": true,
00:12:27.655              "filename": "/dev/nvme0n1",
00:12:27.655              "name": "xnvme_bdev"
00:12:27.655            },
00:12:27.655            "method": "bdev_xnvme_create"
00:12:27.655          },
00:12:27.655          {
00:12:27.655            "method": "bdev_wait_for_examine"
00:12:27.655          }
00:12:27.655        ]
00:12:27.655      }
00:12:27.655    ]
00:12:27.655  }
00:12:27.655  [2024-12-09 17:01:50.613519] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:12:27.655  [2024-12-09 17:01:50.613904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ]
00:12:27.917  [2024-12-09 17:01:50.781132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.917  [2024-12-09 17:01:50.934780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:28.490  Running I/O for 5 seconds...
00:12:30.373      35614.00 IOPS,   139.12 MiB/s
[2024-12-09T17:01:54.357Z]     35515.50 IOPS,   138.73 MiB/s
[2024-12-09T17:01:55.318Z]     35388.33 IOPS,   138.24 MiB/s
[2024-12-09T17:01:56.708Z]     35400.75 IOPS,   138.28 MiB/s
00:12:33.667                                                                                                  Latency(us)
00:12:33.667  
[2024-12-09T17:01:56.708Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:33.667  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:12:33.667  	 xnvme_bdev          :       5.00   33751.27     131.84       0.00     0.00    1891.42      79.16   32062.23
00:12:33.667  
[2024-12-09T17:01:56.708Z]  ===================================================================================================================
00:12:33.667  
[2024-12-09T17:01:56.708Z]  Total                       :              33751.27     131.84       0.00     0.00    1891.42      79.16   32062.23
00:12:34.239  ************************************
00:12:34.239  END TEST xnvme_bdevperf
00:12:34.239  ************************************
00:12:34.239  
00:12:34.239  real	0m13.359s
00:12:34.239  user	0m5.816s
00:12:34.239  sys	0m5.844s
00:12:34.239   17:01:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:34.239   17:01:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:34.239   17:01:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:12:34.239   17:01:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:34.239   17:01:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:34.239   17:01:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:34.239  ************************************
00:12:34.239  START TEST xnvme_fio_plugin
00:12:34.239  ************************************
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:34.239   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:34.239    17:01:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:12:34.239    17:01:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:12:34.239    17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:12:34.239    17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:12:34.239    17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:34.239    17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:34.499   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:34.499   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:34.499   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:12:34.499   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:34.499   17:01:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:34.499  {
00:12:34.499    "subsystems": [
00:12:34.499      {
00:12:34.499        "subsystem": "bdev",
00:12:34.499        "config": [
00:12:34.499          {
00:12:34.499            "params": {
00:12:34.499              "io_mechanism": "libaio",
00:12:34.499              "conserve_cpu": true,
00:12:34.499              "filename": "/dev/nvme0n1",
00:12:34.499              "name": "xnvme_bdev"
00:12:34.499            },
00:12:34.499            "method": "bdev_xnvme_create"
00:12:34.499          },
00:12:34.499          {
00:12:34.499            "method": "bdev_wait_for_examine"
00:12:34.499          }
00:12:34.499        ]
00:12:34.499      }
00:12:34.499    ]
00:12:34.499  }
00:12:34.499  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:12:34.499  fio-3.35
00:12:34.499  Starting 1 thread
00:12:41.093  
00:12:41.093  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71012: Mon Dec  9 17:02:03 2024
00:12:41.093    read: IOPS=32.2k, BW=126MiB/s (132MB/s)(630MiB/5001msec)
00:12:41.093      slat (usec): min=4, max=1877, avg=23.91, stdev=98.33
00:12:41.093      clat (usec): min=106, max=8661, avg=1347.40, stdev=533.95
00:12:41.093       lat (usec): min=186, max=8715, avg=1371.32, stdev=524.43
00:12:41.093      clat percentiles (usec):
00:12:41.093       |  1.00th=[  269],  5.00th=[  506], 10.00th=[  668], 20.00th=[  906],
00:12:41.093       | 30.00th=[ 1090], 40.00th=[ 1221], 50.00th=[ 1336], 60.00th=[ 1450],
00:12:41.093       | 70.00th=[ 1582], 80.00th=[ 1745], 90.00th=[ 1975], 95.00th=[ 2245],
00:12:41.093       | 99.00th=[ 2900], 99.50th=[ 3326], 99.90th=[ 3851], 99.95th=[ 4080],
00:12:41.093       | 99.99th=[ 4490]
00:12:41.093     bw (  KiB/s): min=122352, max=139504, per=100.00%, avg=129357.56, stdev=5751.28, samples=9
00:12:41.093     iops        : min=30588, max=34876, avg=32339.33, stdev=1437.80, samples=9
00:12:41.093    lat (usec)   : 250=0.77%, 500=4.14%, 750=8.15%, 1000=11.85%
00:12:41.093    lat (msec)   : 2=65.69%, 4=9.33%, 10=0.06%
00:12:41.093    cpu          : usr=35.48%, sys=55.70%, ctx=13, majf=0, minf=764
00:12:41.093    IO depths    : 1=0.4%, 2=1.1%, 4=3.0%, 8=8.8%, 16=24.0%, 32=60.6%, >=64=2.0%
00:12:41.093       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:41.093       complete  : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0%
00:12:41.093       issued rwts: total=161256,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:41.093       latency   : target=0, window=0, percentile=100.00%, depth=64
00:12:41.093  
00:12:41.093  Run status group 0 (all jobs):
00:12:41.093     READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=630MiB (661MB), run=5001-5001msec
00:12:41.354  -----------------------------------------------------
00:12:41.354  Suppressions used:
00:12:41.354    count      bytes template
00:12:41.354        1         11 /usr/src/fio/parse.c
00:12:41.354        1          8 libtcmalloc_minimal.so
00:12:41.354        1        904 libcrypto.so
00:12:41.354  -----------------------------------------------------
00:12:41.354  
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:41.354   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:41.355    17:02:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:12:41.355    17:02:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:12:41.355    17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:12:41.355    17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:41.355    17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:12:41.355    17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:41.355   17:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:12:41.355  {
00:12:41.355    "subsystems": [
00:12:41.355      {
00:12:41.355        "subsystem": "bdev",
00:12:41.355        "config": [
00:12:41.355          {
00:12:41.355            "params": {
00:12:41.355              "io_mechanism": "libaio",
00:12:41.355              "conserve_cpu": true,
00:12:41.355              "filename": "/dev/nvme0n1",
00:12:41.355              "name": "xnvme_bdev"
00:12:41.355            },
00:12:41.355            "method": "bdev_xnvme_create"
00:12:41.355          },
00:12:41.355          {
00:12:41.355            "method": "bdev_wait_for_examine"
00:12:41.355          }
00:12:41.355        ]
00:12:41.355      }
00:12:41.355    ]
00:12:41.355  }
00:12:41.617  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:12:41.617  fio-3.35
00:12:41.617  Starting 1 thread
00:12:48.225  
00:12:48.225  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71110: Mon Dec  9 17:02:10 2024
00:12:48.225    write: IOPS=30.9k, BW=121MiB/s (127MB/s)(604MiB/5001msec); 0 zone resets
00:12:48.225      slat (usec): min=4, max=1886, avg=26.41, stdev=100.59
00:12:48.225      clat (usec): min=82, max=5924, avg=1366.27, stdev=582.49
00:12:48.225       lat (usec): min=206, max=5929, avg=1392.67, stdev=574.10
00:12:48.225      clat percentiles (usec):
00:12:48.225       |  1.00th=[  273],  5.00th=[  486], 10.00th=[  652], 20.00th=[  881],
00:12:48.225       | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1319], 60.00th=[ 1467],
00:12:48.225       | 70.00th=[ 1614], 80.00th=[ 1795], 90.00th=[ 2114], 95.00th=[ 2409],
00:12:48.225       | 99.00th=[ 3097], 99.50th=[ 3359], 99.90th=[ 3916], 99.95th=[ 4178],
00:12:48.225       | 99.99th=[ 4686]
00:12:48.225     bw (  KiB/s): min=114976, max=137264, per=99.84%, avg=123396.33, stdev=8308.24, samples=9
00:12:48.225     iops        : min=28744, max=34316, avg=30849.00, stdev=2077.10, samples=9
00:12:48.225    lat (usec)   : 100=0.01%, 250=0.74%, 500=4.57%, 750=8.59%, 1000=13.01%
00:12:48.225    lat (msec)   : 2=60.41%, 4=12.61%, 10=0.08%
00:12:48.225    cpu          : usr=32.74%, sys=57.54%, ctx=19, majf=0, minf=765
00:12:48.225    IO depths    : 1=0.3%, 2=0.9%, 4=2.7%, 8=8.6%, 16=24.3%, 32=61.2%, >=64=2.0%
00:12:48.225       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:48.225       complete  : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0%
00:12:48.225       issued rwts: total=0,154521,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:48.225       latency   : target=0, window=0, percentile=100.00%, depth=64
00:12:48.225  
00:12:48.225  Run status group 0 (all jobs):
00:12:48.225    WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=604MiB (633MB), run=5001-5001msec
00:12:48.485  -----------------------------------------------------
00:12:48.485  Suppressions used:
00:12:48.485    count      bytes template
00:12:48.485        1         11 /usr/src/fio/parse.c
00:12:48.485        1          8 libtcmalloc_minimal.so
00:12:48.485        1        904 libcrypto.so
00:12:48.485  -----------------------------------------------------
00:12:48.485  
00:12:48.485  ************************************
00:12:48.485  END TEST xnvme_fio_plugin
00:12:48.485  ************************************
00:12:48.485  
00:12:48.485  real	0m14.232s
00:12:48.485  user	0m6.488s
00:12:48.485  sys	0m6.416s
00:12:48.485   17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:48.485   17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
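[Annotation] With the libaio combinations finished, the trace lines below (xnvme.sh@75-88) advance the suite to the io_uring io_mechanism. The per-line xtrace corresponds to a nested matrix loop roughly like the following sketch (array and variable names taken from the trace; loop bodies reduced to the run_test calls shown):

  # Each io_mechanism is exercised with conserve_cpu=false and then =true.
  for io in "${xnvme_io[@]}"; do                     # libaio, io_uring, ...
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
      for cc in "${xnvme_conserve_cpu[@]}"; do       # false, true
          method_bdev_xnvme_create_0["conserve_cpu"]=$cc
          run_test xnvme_rpc xnvme_rpc
          run_test xnvme_bdevperf xnvme_bdevperf
          run_test xnvme_fio_plugin xnvme_fio_plugin
      done
  done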
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:12:48.746   17:02:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:12:48.746   17:02:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:48.746   17:02:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:48.746   17:02:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:48.746  ************************************
00:12:48.746  START TEST xnvme_rpc
00:12:48.746  ************************************
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71197
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71197
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71197 ']'
00:12:48.746  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:48.746   17:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:48.746  [2024-12-09 17:02:11.661530] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:12:48.746  [2024-12-09 17:02:11.661705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71197 ]
00:12:49.007  [2024-12-09 17:02:11.829695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:49.007  [2024-12-09 17:02:11.981325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.952  xnvme_bdev
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71197
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71197 ']'
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71197
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:49.952    17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71197
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:49.952  killing process with pid 71197
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71197'
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71197
00:12:49.952   17:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71197
00:12:51.867  
00:12:51.867  real	0m3.270s
00:12:51.867  user	0m3.140s
00:12:51.867  sys	0m0.637s
00:12:51.867   17:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:51.867  ************************************
00:12:51.867  END TEST xnvme_rpc
00:12:51.867  ************************************
00:12:51.867   17:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.867   17:02:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:12:51.867   17:02:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:51.867   17:02:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:51.867   17:02:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:51.867  ************************************
00:12:51.867  START TEST xnvme_bdevperf
00:12:51.867  ************************************
00:12:51.867   17:02:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:12:51.867   17:02:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:12:51.867   17:02:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:12:51.867   17:02:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:51.867   17:02:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:12:51.867    17:02:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:12:51.867    17:02:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:12:51.867    17:02:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:52.128  {
00:12:52.128    "subsystems": [
00:12:52.128      {
00:12:52.128        "subsystem": "bdev",
00:12:52.128        "config": [
00:12:52.128          {
00:12:52.128            "params": {
00:12:52.128              "io_mechanism": "io_uring",
00:12:52.128              "conserve_cpu": false,
00:12:52.128              "filename": "/dev/nvme0n1",
00:12:52.128              "name": "xnvme_bdev"
00:12:52.128            },
00:12:52.128            "method": "bdev_xnvme_create"
00:12:52.128          },
00:12:52.128          {
00:12:52.128            "method": "bdev_wait_for_examine"
00:12:52.128          }
00:12:52.128        ]
00:12:52.128      }
00:12:52.128    ]
00:12:52.128  }
00:12:52.128  [2024-12-09 17:02:14.985400] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:12:52.128  [2024-12-09 17:02:14.985815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71277 ]
00:12:52.128  [2024-12-09 17:02:15.156095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:52.389  [2024-12-09 17:02:15.296365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:52.651  Running I/O for 5 seconds...
00:12:55.032      33790.00 IOPS,   131.99 MiB/s
[2024-12-09T17:02:18.645Z]     33595.50 IOPS,   131.23 MiB/s
[2024-12-09T17:02:20.033Z]     33445.33 IOPS,   130.65 MiB/s
[2024-12-09T17:02:20.977Z]     33137.25 IOPS,   129.44 MiB/s
[2024-12-09T17:02:20.977Z]     33282.80 IOPS,   130.01 MiB/s
00:12:57.936                                                                                                  Latency(us)
00:12:57.936  
[2024-12-09T17:02:20.977Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:57.936  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:12:57.936  	 xnvme_bdev          :       5.01   33234.28     129.82       0.00     0.00    1919.10     269.39   12300.60
00:12:57.936  
[2024-12-09T17:02:20.977Z]  ===================================================================================================================
00:12:57.936  
[2024-12-09T17:02:20.977Z]  Total                       :              33234.28     129.82       0.00     0.00    1919.10     269.39   12300.60
00:12:58.509   17:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:12:58.509   17:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:12:58.509    17:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:12:58.509    17:02:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:12:58.509    17:02:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:58.770  {
00:12:58.770    "subsystems": [
00:12:58.770      {
00:12:58.770        "subsystem": "bdev",
00:12:58.770        "config": [
00:12:58.770          {
00:12:58.770            "params": {
00:12:58.770              "io_mechanism": "io_uring",
00:12:58.770              "conserve_cpu": false,
00:12:58.770              "filename": "/dev/nvme0n1",
00:12:58.770              "name": "xnvme_bdev"
00:12:58.770            },
00:12:58.770            "method": "bdev_xnvme_create"
00:12:58.770          },
00:12:58.770          {
00:12:58.770            "method": "bdev_wait_for_examine"
00:12:58.770          }
00:12:58.770        ]
00:12:58.770      }
00:12:58.770    ]
00:12:58.770  }
00:12:58.770  [2024-12-09 17:02:21.623342] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:12:58.771  [2024-12-09 17:02:21.623785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ]
00:12:58.771  [2024-12-09 17:02:21.794036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:59.032  [2024-12-09 17:02:21.946829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:59.293  Running I/O for 5 seconds...
00:13:01.252      15165.00 IOPS,    59.24 MiB/s
[2024-12-09T17:02:25.686Z]     16199.00 IOPS,    63.28 MiB/s
[2024-12-09T17:02:26.629Z]     19120.00 IOPS,    74.69 MiB/s
[2024-12-09T17:02:27.584Z]     16312.00 IOPS,    63.72 MiB/s
[2024-12-09T17:02:27.584Z]     18659.80 IOPS,    72.89 MiB/s
00:13:04.543                                                                                                  Latency(us)
00:13:04.543  
[2024-12-09T17:02:27.584Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:04.543  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:13:04.543  	 xnvme_bdev          :       5.02   18614.35      72.71       0.00     0.00    3428.90      82.31   32062.23
00:13:04.543  
[2024-12-09T17:02:27.584Z]  ===================================================================================================================
00:13:04.543  
[2024-12-09T17:02:27.584Z]  Total                       :              18614.35      72.71       0.00     0.00    3428.90      82.31   32062.23
00:13:05.164  
00:13:05.164  real	0m13.302s
00:13:05.164  user	0m6.091s
00:13:05.164  sys	0m6.909s
00:13:05.164   17:02:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:05.164   17:02:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:05.164  ************************************
00:13:05.164  END TEST xnvme_bdevperf
00:13:05.164  ************************************
00:13:05.426   17:02:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:13:05.426   17:02:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:05.426   17:02:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:05.426   17:02:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:05.426  ************************************
00:13:05.426  START TEST xnvme_fio_plugin
00:13:05.426  ************************************
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:05.426    17:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:05.426    17:02:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:05.426    17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:05.426    17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:05.426    17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:05.426    17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:05.426   17:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:05.426  {
00:13:05.426    "subsystems": [
00:13:05.426      {
00:13:05.426        "subsystem": "bdev",
00:13:05.426        "config": [
00:13:05.426          {
00:13:05.426            "params": {
00:13:05.426              "io_mechanism": "io_uring",
00:13:05.426              "conserve_cpu": false,
00:13:05.426              "filename": "/dev/nvme0n1",
00:13:05.426              "name": "xnvme_bdev"
00:13:05.426            },
00:13:05.426            "method": "bdev_xnvme_create"
00:13:05.426          },
00:13:05.426          {
00:13:05.426            "method": "bdev_wait_for_examine"
00:13:05.426          }
00:13:05.426        ]
00:13:05.426      }
00:13:05.426    ]
00:13:05.426  }
00:13:05.688  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:05.688  fio-3.35
00:13:05.688  Starting 1 thread
00:13:12.282  
00:13:12.282  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71470: Mon Dec  9 17:02:34 2024
00:13:12.282    read: IOPS=35.3k, BW=138MiB/s (145MB/s)(690MiB/5001msec)
00:13:12.282      slat (usec): min=2, max=147, avg= 4.11, stdev= 2.24
00:13:12.282      clat (usec): min=1035, max=10133, avg=1645.23, stdev=274.35
00:13:12.282       lat (usec): min=1038, max=10148, avg=1649.34, stdev=274.71
00:13:12.282      clat percentiles (usec):
00:13:12.282       |  1.00th=[ 1270],  5.00th=[ 1352], 10.00th=[ 1401], 20.00th=[ 1467],
00:13:12.282       | 30.00th=[ 1516], 40.00th=[ 1565], 50.00th=[ 1598], 60.00th=[ 1663],
00:13:12.282       | 70.00th=[ 1713], 80.00th=[ 1795], 90.00th=[ 1926], 95.00th=[ 2073],
00:13:12.282       | 99.00th=[ 2343], 99.50th=[ 2442], 99.90th=[ 2802], 99.95th=[ 3720],
00:13:12.282       | 99.99th=[10028]
00:13:12.282     bw (  KiB/s): min=135680, max=147456, per=100.00%, avg=141824.00, stdev=3537.99, samples=9
00:13:12.282     iops        : min=33920, max=36864, avg=35456.00, stdev=884.50, samples=9
00:13:12.282    lat (msec)   : 2=92.94%, 4=7.03%, 10=0.02%, 20=0.02%
00:13:12.282    cpu          : usr=32.00%, sys=66.48%, ctx=12, majf=0, minf=762
00:13:12.282    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:13:12.282       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:12.282       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:13:12.282       issued rwts: total=176512,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:12.282       latency   : target=0, window=0, percentile=100.00%, depth=64
00:13:12.282  
00:13:12.282  Run status group 0 (all jobs):
00:13:12.282     READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=690MiB (723MB), run=5001-5001msec
00:13:12.547  -----------------------------------------------------
00:13:12.547  Suppressions used:
00:13:12.547    count      bytes template
00:13:12.547        1         11 /usr/src/fio/parse.c
00:13:12.547        1          8 libtcmalloc_minimal.so
00:13:12.547        1        904 libcrypto.so
00:13:12.547  -----------------------------------------------------
00:13:12.547  
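(For reference: the randread pass above can be reproduced outside the test harness. A minimal sketch, assuming an SPDK checkout at $SPDK with the fio plugin built; the JSON body and every fio flag are copied from the trace above, and the libasan preload is only needed for ASAN-instrumented builds like this one. The harness feeds the same JSON over /dev/fd/62 via process substitution; a temp file works just as well standalone.)

  cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "io_uring",
              "conserve_cpu": false,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # preload the SPDK fio plugin (libasan must come first on ASAN builds)
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
    fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
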
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:12.547    17:02:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:12.547    17:02:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:12.547    17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:12.547    17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:12.547    17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:12.547    17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:12.547  {
00:13:12.547    "subsystems": [
00:13:12.547      {
00:13:12.547        "subsystem": "bdev",
00:13:12.547        "config": [
00:13:12.547          {
00:13:12.547            "params": {
00:13:12.547              "io_mechanism": "io_uring",
00:13:12.547              "conserve_cpu": false,
00:13:12.547              "filename": "/dev/nvme0n1",
00:13:12.547              "name": "xnvme_bdev"
00:13:12.547            },
00:13:12.547            "method": "bdev_xnvme_create"
00:13:12.547          },
00:13:12.547          {
00:13:12.547            "method": "bdev_wait_for_examine"
00:13:12.547          }
00:13:12.547        ]
00:13:12.547      }
00:13:12.547    ]
00:13:12.547  }
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:12.547   17:02:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:12.547  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:12.547  fio-3.35
00:13:12.547  Starting 1 thread
00:13:19.162  
00:13:19.162  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71563: Mon Dec  9 17:02:41 2024
00:13:19.162    write: IOPS=34.7k, BW=136MiB/s (142MB/s)(679MiB/5001msec); 0 zone resets
00:13:19.162      slat (nsec): min=2910, max=65217, avg=4655.42, stdev=2384.26
00:13:19.162      clat (usec): min=152, max=4485, avg=1655.89, stdev=261.28
00:13:19.162       lat (usec): min=166, max=4503, avg=1660.55, stdev=261.82
00:13:19.162      clat percentiles (usec):
00:13:19.162       |  1.00th=[ 1221],  5.00th=[ 1319], 10.00th=[ 1369], 20.00th=[ 1450],
00:13:19.162       | 30.00th=[ 1500], 40.00th=[ 1565], 50.00th=[ 1614], 60.00th=[ 1680],
00:13:19.162       | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 2008], 95.00th=[ 2147],
00:13:19.162       | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2999], 99.95th=[ 3097],
00:13:19.162       | 99.99th=[ 3261]
00:13:19.162     bw (  KiB/s): min=132088, max=146725, per=99.99%, avg=138969.44, stdev=5842.13, samples=9
00:13:19.162     iops        : min=33022, max=36681, avg=34742.22, stdev=1460.49, samples=9
00:13:19.162    lat (usec)   : 250=0.01%, 750=0.01%
00:13:19.162    lat (msec)   : 2=89.99%, 4=10.01%, 10=0.01%
00:13:19.162    cpu          : usr=33.74%, sys=64.72%, ctx=12, majf=0, minf=763
00:13:19.162    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:13:19.162       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:19.162       complete  : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0%
00:13:19.162       issued rwts: total=0,173771,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:19.162       latency   : target=0, window=0, percentile=100.00%, depth=64
00:13:19.162  
00:13:19.162  Run status group 0 (all jobs):
00:13:19.162    WRITE: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5001-5001msec
00:13:19.423  -----------------------------------------------------
00:13:19.423  Suppressions used:
00:13:19.423    count      bytes template
00:13:19.423        1         11 /usr/src/fio/parse.c
00:13:19.423        1          8 libtcmalloc_minimal.so
00:13:19.423        1        904 libcrypto.so
00:13:19.423  -----------------------------------------------------
00:13:19.423  
00:13:19.423  ************************************
00:13:19.423  END TEST xnvme_fio_plugin
00:13:19.423  ************************************
00:13:19.423  
00:13:19.423  real	0m14.150s
00:13:19.423  user	0m6.371s
00:13:19.423  sys	0m7.283s
00:13:19.423   17:02:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:19.423   17:02:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:19.685   17:02:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:13:19.685   17:02:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:13:19.685   17:02:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:13:19.685   17:02:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:13:19.685   17:02:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:19.685   17:02:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:19.685   17:02:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:19.685  ************************************
00:13:19.685  START TEST xnvme_rpc
00:13:19.685  ************************************
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71649
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71649
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71649 ']'
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:19.685  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:19.685   17:02:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:19.685  [2024-12-09 17:02:42.600970] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:13:19.685  [2024-12-09 17:02:42.601156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71649 ]
00:13:19.947  [2024-12-09 17:02:42.768796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:19.947  [2024-12-09 17:02:42.907996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:20.889  xnvme_bdev
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71649
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71649 ']'
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71649
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:20.889    17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71649
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:20.889  killing process with pid 71649
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71649'
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71649
00:13:20.889   17:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71649
00:13:22.807  
00:13:22.807  real	0m2.880s
00:13:22.807  user	0m2.875s
00:13:22.807  sys	0m0.517s
00:13:22.807  ************************************
00:13:22.807  END TEST xnvme_rpc
00:13:22.807  ************************************
00:13:22.807   17:02:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:22.807   17:02:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
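(The xnvme_rpc test above is a create/inspect/delete round-trip over the RPC socket. A hand-run sketch against a live spdk_tgt, assuming scripts/rpc.py as the RPC client, which the rpc_cmd helper in the trace wraps; the subcommands, arguments, and jq filter are the ones visible above.)

  SPDK=/home/vagrant/spdk_repo/spdk          # path as used in this job
  $SPDK/build/bin/spdk_tgt &                 # start the target, wait for the socket
  # create the xnvme bdev on the block device: io_uring with conserve_cpu (-c)
  $SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # read the bdev subsystem config back and check the params that were applied
  $SPDK/scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
  # tear down
  $SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
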
00:13:22.807   17:02:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:13:22.807   17:02:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:22.807   17:02:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:22.807   17:02:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:22.807  ************************************
00:13:22.807  START TEST xnvme_bdevperf
00:13:22.807  ************************************
00:13:22.807   17:02:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:13:22.807   17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:13:22.807   17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:13:22.807   17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:22.807   17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:13:22.807    17:02:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:13:22.807    17:02:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:13:22.807    17:02:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:22.807  {
00:13:22.807    "subsystems": [
00:13:22.807      {
00:13:22.807        "subsystem": "bdev",
00:13:22.807        "config": [
00:13:22.807          {
00:13:22.807            "params": {
00:13:22.807              "io_mechanism": "io_uring",
00:13:22.807              "conserve_cpu": true,
00:13:22.807              "filename": "/dev/nvme0n1",
00:13:22.807              "name": "xnvme_bdev"
00:13:22.807            },
00:13:22.807            "method": "bdev_xnvme_create"
00:13:22.807          },
00:13:22.807          {
00:13:22.807            "method": "bdev_wait_for_examine"
00:13:22.807          }
00:13:22.807        ]
00:13:22.807      }
00:13:22.807    ]
00:13:22.807  }
00:13:22.808  [2024-12-09 17:02:45.491522] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:13:22.808  [2024-12-09 17:02:45.491637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71718 ]
00:13:22.808  [2024-12-09 17:02:45.649799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:22.808  [2024-12-09 17:02:45.751531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:23.069  Running I/O for 5 seconds...
00:13:25.398      42259.00 IOPS,   165.07 MiB/s
[2024-12-09T17:02:49.030Z]     39393.00 IOPS,   153.88 MiB/s
[2024-12-09T17:02:50.422Z]     37122.67 IOPS,   145.01 MiB/s
[2024-12-09T17:02:51.367Z]     36046.75 IOPS,   140.81 MiB/s
[2024-12-09T17:02:51.367Z]     36166.80 IOPS,   141.28 MiB/s
00:13:28.326                                                                                                  Latency(us)
00:13:28.326  
[2024-12-09T17:02:51.367Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:28.326  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:13:28.326  	 xnvme_bdev          :       5.01   36137.20     141.16       0.00     0.00    1766.50     363.91   17241.01
00:13:28.326  
[2024-12-09T17:02:51.367Z]  ===================================================================================================================
00:13:28.326  
[2024-12-09T17:02:51.367Z]  Total                       :              36137.20     141.16       0.00     0.00    1766.50     363.91   17241.01
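(bdevperf consumes the same JSON bdev config as the fio plugin; the harness hands it over on fd 62, but standalone a plain file works. A sketch with the exact flags from the invocation above, pointing --json at a file carrying the params printed just above, here with conserve_cpu set to true.)

  $SPDK/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
    -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
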
00:13:28.900   17:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:28.900   17:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:13:28.900    17:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:13:28.900    17:02:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:13:28.900    17:02:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:28.900  {
00:13:28.900    "subsystems": [
00:13:28.900      {
00:13:28.900        "subsystem": "bdev",
00:13:28.900        "config": [
00:13:28.900          {
00:13:28.900            "params": {
00:13:28.900              "io_mechanism": "io_uring",
00:13:28.900              "conserve_cpu": true,
00:13:28.900              "filename": "/dev/nvme0n1",
00:13:28.900              "name": "xnvme_bdev"
00:13:28.900            },
00:13:28.900            "method": "bdev_xnvme_create"
00:13:28.900          },
00:13:28.900          {
00:13:28.900            "method": "bdev_wait_for_examine"
00:13:28.900          }
00:13:28.900        ]
00:13:28.900      }
00:13:28.900    ]
00:13:28.900  }
00:13:28.900  [2024-12-09 17:02:51.789391] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:13:28.900  [2024-12-09 17:02:51.789663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71793 ]
00:13:29.162  [2024-12-09 17:02:51.950096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:29.162  [2024-12-09 17:02:52.048521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:29.423  Running I/O for 5 seconds...
00:13:31.305      18568.00 IOPS,    72.53 MiB/s
[2024-12-09T17:02:55.727Z]     21921.00 IOPS,    85.63 MiB/s
[2024-12-09T17:02:56.669Z]     21587.00 IOPS,    84.32 MiB/s
[2024-12-09T17:02:57.641Z]     21243.00 IOPS,    82.98 MiB/s
[2024-12-09T17:02:57.641Z]     20268.00 IOPS,    79.17 MiB/s
00:13:34.600                                                                                                  Latency(us)
00:13:34.600  
[2024-12-09T17:02:57.641Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:34.600  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:13:34.600  	 xnvme_bdev          :       5.01   20222.17      78.99       0.00     0.00    3155.51      54.74   29642.44
00:13:34.600  
[2024-12-09T17:02:57.641Z]  ===================================================================================================================
00:13:34.600  
[2024-12-09T17:02:57.641Z]  Total                       :              20222.17      78.99       0.00     0.00    3155.51      54.74   29642.44
00:13:35.172  
00:13:35.172  real	0m12.773s
00:13:35.172  user	0m8.618s
00:13:35.172  sys	0m3.320s
00:13:35.172  ************************************
00:13:35.172  END TEST xnvme_bdevperf
00:13:35.172  ************************************
00:13:35.172   17:02:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:35.172   17:02:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:35.432   17:02:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:13:35.432   17:02:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:35.432   17:02:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:35.432   17:02:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:35.432  ************************************
00:13:35.432  START TEST xnvme_fio_plugin
00:13:35.432  ************************************
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:35.432    17:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:35.432    17:02:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:35.432    17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:35.432    17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:35.432    17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:35.432    17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:35.432   17:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:35.432  {
00:13:35.432    "subsystems": [
00:13:35.432      {
00:13:35.432        "subsystem": "bdev",
00:13:35.432        "config": [
00:13:35.432          {
00:13:35.432            "params": {
00:13:35.432              "io_mechanism": "io_uring",
00:13:35.432              "conserve_cpu": true,
00:13:35.432              "filename": "/dev/nvme0n1",
00:13:35.432              "name": "xnvme_bdev"
00:13:35.432            },
00:13:35.432            "method": "bdev_xnvme_create"
00:13:35.432          },
00:13:35.432          {
00:13:35.432            "method": "bdev_wait_for_examine"
00:13:35.432          }
00:13:35.432        ]
00:13:35.432      }
00:13:35.432    ]
00:13:35.432  }
00:13:35.693  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:35.693  fio-3.35
00:13:35.693  Starting 1 thread
00:13:42.286  
00:13:42.286  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71918: Mon Dec  9 17:03:04 2024
00:13:42.286    read: IOPS=35.0k, BW=137MiB/s (143MB/s)(684MiB/5001msec)
00:13:42.286      slat (usec): min=2, max=169, avg= 4.08, stdev= 2.31
00:13:42.286      clat (usec): min=810, max=3670, avg=1659.16, stdev=245.28
00:13:42.286       lat (usec): min=813, max=3708, avg=1663.23, stdev=245.83
00:13:42.286      clat percentiles (usec):
00:13:42.286       |  1.00th=[ 1156],  5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1467],
00:13:42.286       | 30.00th=[ 1532], 40.00th=[ 1582], 50.00th=[ 1631], 60.00th=[ 1680],
00:13:42.286       | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 1975], 95.00th=[ 2114],
00:13:42.286       | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2802], 99.95th=[ 2966],
00:13:42.286       | 99.99th=[ 3490]
00:13:42.286     bw (  KiB/s): min=134095, max=155136, per=100.00%, avg=140680.78, stdev=5992.83, samples=9
00:13:42.286     iops        : min=33523, max=38784, avg=35170.11, stdev=1498.31, samples=9
00:13:42.286    lat (usec)   : 1000=0.16%
00:13:42.286    lat (msec)   : 2=91.09%, 4=8.75%
00:13:42.286    cpu          : usr=40.72%, sys=54.66%, ctx=12, majf=0, minf=762
00:13:42.286    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:13:42.286       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:42.286       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:13:42.286       issued rwts: total=175168,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:42.286       latency   : target=0, window=0, percentile=100.00%, depth=64
00:13:42.286  
00:13:42.286  Run status group 0 (all jobs):
00:13:42.286     READ: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=684MiB (717MB), run=5001-5001msec
00:13:42.286  -----------------------------------------------------
00:13:42.286  Suppressions used:
00:13:42.286    count      bytes template
00:13:42.286        1         11 /usr/src/fio/parse.c
00:13:42.286        1          8 libtcmalloc_minimal.so
00:13:42.286        1        904 libcrypto.so
00:13:42.286  -----------------------------------------------------
00:13:42.286  
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:13:42.286    17:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:13:42.286    17:03:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:13:42.286    17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:42.286    17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:42.286    17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:13:42.286    17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:42.286   17:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:13:42.286  {
00:13:42.286    "subsystems": [
00:13:42.286      {
00:13:42.286        "subsystem": "bdev",
00:13:42.286        "config": [
00:13:42.286          {
00:13:42.286            "params": {
00:13:42.286              "io_mechanism": "io_uring",
00:13:42.286              "conserve_cpu": true,
00:13:42.286              "filename": "/dev/nvme0n1",
00:13:42.286              "name": "xnvme_bdev"
00:13:42.286            },
00:13:42.286            "method": "bdev_xnvme_create"
00:13:42.286          },
00:13:42.286          {
00:13:42.286            "method": "bdev_wait_for_examine"
00:13:42.286          }
00:13:42.286        ]
00:13:42.286      }
00:13:42.286    ]
00:13:42.286  }
00:13:42.546  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:13:42.546  fio-3.35
00:13:42.546  Starting 1 thread
00:13:49.130  
00:13:49.130  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72004: Mon Dec  9 17:03:11 2024
00:13:49.130    write: IOPS=36.0k, BW=141MiB/s (148MB/s)(704MiB/5001msec); 0 zone resets
00:13:49.130      slat (usec): min=2, max=352, avg= 4.33, stdev= 3.03
00:13:49.130      clat (usec): min=855, max=5570, avg=1602.99, stdev=257.43
00:13:49.130       lat (usec): min=859, max=5574, avg=1607.32, stdev=257.88
00:13:49.130      clat percentiles (usec):
00:13:49.130       |  1.00th=[ 1156],  5.00th=[ 1270], 10.00th=[ 1319], 20.00th=[ 1401],
00:13:49.130       | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1631],
00:13:49.130       | 70.00th=[ 1696], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2073],
00:13:49.130       | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 3032], 99.95th=[ 3294],
00:13:49.130       | 99.99th=[ 3851]
00:13:49.130     bw (  KiB/s): min=134624, max=151952, per=100.00%, avg=144111.11, stdev=5039.16, samples=9
00:13:49.130     iops        : min=33656, max=37988, avg=36027.78, stdev=1259.79, samples=9
00:13:49.130    lat (usec)   : 1000=0.03%
00:13:49.130    lat (msec)   : 2=93.03%, 4=6.94%, 10=0.01%
00:13:49.130    cpu          : usr=46.28%, sys=48.90%, ctx=119, majf=0, minf=763
00:13:49.130    IO depths    : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6%
00:13:49.130       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:49.130       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0%
00:13:49.130       issued rwts: total=0,180166,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:49.130       latency   : target=0, window=0, percentile=100.00%, depth=64
00:13:49.130  
00:13:49.130  Run status group 0 (all jobs):
00:13:49.130    WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=704MiB (738MB), run=5001-5001msec
00:13:49.392  -----------------------------------------------------
00:13:49.392  Suppressions used:
00:13:49.392    count      bytes template
00:13:49.392        1         11 /usr/src/fio/parse.c
00:13:49.392        1          8 libtcmalloc_minimal.so
00:13:49.392        1        904 libcrypto.so
00:13:49.392  -----------------------------------------------------
00:13:49.392  
00:13:49.392  ************************************
00:13:49.392  END TEST xnvme_fio_plugin
00:13:49.392  ************************************
00:13:49.392  
00:13:49.392  real	0m13.948s
00:13:49.392  user	0m7.311s
00:13:49.392  sys	0m5.829s
00:13:49.392   17:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.392   17:03:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:13:49.392   17:03:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:13:49.392   17:03:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:49.392   17:03:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:49.392   17:03:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:49.392  ************************************
00:13:49.392  START TEST xnvme_rpc
00:13:49.392  ************************************
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72096
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72096
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72096 ']'
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:49.392  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:49.392   17:03:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:49.392  [2024-12-09 17:03:12.394392] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:13:49.392  [2024-12-09 17:03:12.394593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72096 ]
00:13:49.653  [2024-12-09 17:03:12.563543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:49.653  [2024-12-09 17:03:12.688380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:50.599  xnvme_bdev
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72096
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72096 ']'
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72096
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:50.599    17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72096
00:13:50.599  killing process with pid 72096
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72096'
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72096
00:13:50.599   17:03:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72096
00:13:52.516  ************************************
00:13:52.516  END TEST xnvme_rpc
00:13:52.516  ************************************
00:13:52.516  
00:13:52.516  real	0m2.963s
00:13:52.516  user	0m2.953s
00:13:52.516  sys	0m0.498s
00:13:52.516   17:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:52.516   17:03:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:52.516   17:03:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:13:52.516   17:03:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:52.516   17:03:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:52.516   17:03:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:52.516  ************************************
00:13:52.516  START TEST xnvme_bdevperf
00:13:52.516  ************************************
00:13:52.516   17:03:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:13:52.516   17:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:13:52.516   17:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:13:52.516   17:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:52.516   17:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:13:52.516    17:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:13:52.516    17:03:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:13:52.516    17:03:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:52.516  {
00:13:52.516    "subsystems": [
00:13:52.516      {
00:13:52.516        "subsystem": "bdev",
00:13:52.516        "config": [
00:13:52.516          {
00:13:52.516            "params": {
00:13:52.516              "io_mechanism": "io_uring_cmd",
00:13:52.516              "conserve_cpu": false,
00:13:52.516              "filename": "/dev/ng0n1",
00:13:52.516              "name": "xnvme_bdev"
00:13:52.516            },
00:13:52.516            "method": "bdev_xnvme_create"
00:13:52.516          },
00:13:52.516          {
00:13:52.516            "method": "bdev_wait_for_examine"
00:13:52.516          }
00:13:52.516        ]
00:13:52.516      }
00:13:52.516    ]
00:13:52.516  }
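(Compared with the io_uring runs earlier, only two params change in this config: io_mechanism becomes io_uring_cmd and filename moves from the block device /dev/nvme0n1 to /dev/ng0n1, the NVMe generic character device, so I/O is submitted as uring passthrough commands rather than through the block layer. The RPC equivalent, a sketch mirroring the earlier xnvme_rpc trace:)

  # same create call as before, switched to the char device and io_uring_cmd
  $SPDK/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
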
00:13:52.516  [2024-12-09 17:03:15.415015] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:13:52.516  [2024-12-09 17:03:15.415398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72160 ]
00:13:52.778  [2024-12-09 17:03:15.579530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.778  [2024-12-09 17:03:15.709448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:53.040  Running I/O for 5 seconds...
00:13:55.370      35478.00 IOPS,   138.59 MiB/s
[2024-12-09T17:03:19.358Z]     34859.50 IOPS,   136.17 MiB/s
[2024-12-09T17:03:20.303Z]     34281.33 IOPS,   133.91 MiB/s
[2024-12-09T17:03:21.268Z]     34722.25 IOPS,   135.63 MiB/s
[2024-12-09T17:03:21.268Z]     34516.80 IOPS,   134.83 MiB/s
00:13:58.227                                                                                                  Latency(us)
00:13:58.227  
[2024-12-09T17:03:21.268Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:58.227  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:13:58.227  	 xnvme_bdev          :       5.01   34490.26     134.73       0.00     0.00    1850.69     321.38   10435.35
00:13:58.227  
[2024-12-09T17:03:21.268Z]  ===================================================================================================================
00:13:58.227  
[2024-12-09T17:03:21.268Z]  Total                       :              34490.26     134.73       0.00     0.00    1850.69     321.38   10435.35
00:13:58.798   17:03:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:13:58.798   17:03:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:13:58.798    17:03:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:13:58.798    17:03:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:13:58.798    17:03:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:59.059  {
00:13:59.059    "subsystems": [
00:13:59.059      {
00:13:59.059        "subsystem": "bdev",
00:13:59.059        "config": [
00:13:59.059          {
00:13:59.059            "params": {
00:13:59.059              "io_mechanism": "io_uring_cmd",
00:13:59.059              "conserve_cpu": false,
00:13:59.059              "filename": "/dev/ng0n1",
00:13:59.059              "name": "xnvme_bdev"
00:13:59.059            },
00:13:59.059            "method": "bdev_xnvme_create"
00:13:59.059          },
00:13:59.059          {
00:13:59.059            "method": "bdev_wait_for_examine"
00:13:59.059          }
00:13:59.059        ]
00:13:59.059      }
00:13:59.059    ]
00:13:59.059  }
00:13:59.059  [2024-12-09 17:03:21.910227] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:13:59.059  [2024-12-09 17:03:21.910372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72240 ]
00:13:59.059  [2024-12-09 17:03:22.076079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:59.320  [2024-12-09 17:03:22.203586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:59.580  Running I/O for 5 seconds...
00:14:01.467      20310.00 IOPS,    79.34 MiB/s
[2024-12-09T17:03:25.893Z]     20316.00 IOPS,    79.36 MiB/s
[2024-12-09T17:03:26.837Z]     18245.67 IOPS,    71.27 MiB/s
[2024-12-09T17:03:27.780Z]     19624.50 IOPS,    76.66 MiB/s
[2024-12-09T17:03:27.780Z]     21509.80 IOPS,    84.02 MiB/s
00:14:04.739                                                                                                  Latency(us)
00:14:04.739  
[2024-12-09T17:03:27.780Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:04.739  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:14:04.739  	 xnvme_bdev          :       5.00   21507.97      84.02       0.00     0.00    2971.36      83.50   18148.43
00:14:04.739  
[2024-12-09T17:03:27.780Z]  ===================================================================================================================
00:14:04.739  
[2024-12-09T17:03:27.780Z]  Total                       :              21507.97      84.02       0.00     0.00    2971.36      83.50   18148.43
00:14:05.360   17:03:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:05.360   17:03:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:14:05.360    17:03:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:05.360    17:03:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:05.360    17:03:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:05.664  {
00:14:05.664    "subsystems": [
00:14:05.664      {
00:14:05.664        "subsystem": "bdev",
00:14:05.664        "config": [
00:14:05.664          {
00:14:05.664            "params": {
00:14:05.664              "io_mechanism": "io_uring_cmd",
00:14:05.664              "conserve_cpu": false,
00:14:05.664              "filename": "/dev/ng0n1",
00:14:05.664              "name": "xnvme_bdev"
00:14:05.664            },
00:14:05.664            "method": "bdev_xnvme_create"
00:14:05.664          },
00:14:05.664          {
00:14:05.664            "method": "bdev_wait_for_examine"
00:14:05.664          }
00:14:05.664        ]
00:14:05.664      }
00:14:05.664    ]
00:14:05.664  }
00:14:05.664  [2024-12-09 17:03:28.462365] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:05.664  [2024-12-09 17:03:28.462546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72315 ]
00:14:05.664  [2024-12-09 17:03:28.631985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:05.932  [2024-12-09 17:03:28.798427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:06.193  Running I/O for 5 seconds...
00:14:08.522      70912.00 IOPS,   277.00 MiB/s
[2024-12-09T17:03:32.507Z]     70656.00 IOPS,   276.00 MiB/s
[2024-12-09T17:03:33.451Z]     71253.33 IOPS,   278.33 MiB/s
[2024-12-09T17:03:34.394Z]     76416.00 IOPS,   298.50 MiB/s
[2024-12-09T17:03:34.394Z]     79718.40 IOPS,   311.40 MiB/s
00:14:11.353                                                                                                  Latency(us)
00:14:11.353  
[2024-12-09T17:03:34.394Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:11.353  Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:14:11.353  	 xnvme_bdev          :       5.00   79707.00     311.36       0.00     0.00     799.49     475.77    3112.96
00:14:11.353  
[2024-12-09T17:03:34.394Z]  ===================================================================================================================
00:14:11.353  
[2024-12-09T17:03:34.394Z]  Total                       :              79707.00     311.36       0.00     0.00     799.49     475.77    3112.96
00:14:11.926   17:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:11.926   17:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:14:11.926    17:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:11.926    17:03:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:11.926    17:03:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:11.926  {
00:14:11.926    "subsystems": [
00:14:11.926      {
00:14:11.926        "subsystem": "bdev",
00:14:11.926        "config": [
00:14:11.926          {
00:14:11.926            "params": {
00:14:11.926              "io_mechanism": "io_uring_cmd",
00:14:11.926              "conserve_cpu": false,
00:14:11.926              "filename": "/dev/ng0n1",
00:14:11.926              "name": "xnvme_bdev"
00:14:11.926            },
00:14:11.926            "method": "bdev_xnvme_create"
00:14:11.926          },
00:14:11.926          {
00:14:11.926            "method": "bdev_wait_for_examine"
00:14:11.926          }
00:14:11.926        ]
00:14:11.926      }
00:14:11.926    ]
00:14:11.926  }
00:14:11.926  [2024-12-09 17:03:34.806483] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:11.926  [2024-12-09 17:03:34.806600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72390 ]
00:14:11.926  [2024-12-09 17:03:34.963649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:12.187  [2024-12-09 17:03:35.059251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:12.447  Running I/O for 5 seconds...
00:14:14.324      17674.00 IOPS,    69.04 MiB/s
[2024-12-09T17:03:38.307Z]     17347.00 IOPS,    67.76 MiB/s
[2024-12-09T17:03:39.691Z]     16786.67 IOPS,    65.57 MiB/s
[2024-12-09T17:03:40.633Z]     13377.50 IOPS,    52.26 MiB/s
[2024-12-09T17:03:40.895Z]     11966.80 IOPS,    46.75 MiB/s
00:14:17.854                                                                                                  Latency(us)
00:14:17.854  
[2024-12-09T17:03:40.895Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:17.854  Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:14:17.854  	 xnvme_bdev          :       5.38   11131.55      43.48       0.00     0.00    5544.12     100.82  574297.01
00:14:17.854  
[2024-12-09T17:03:40.895Z]  ===================================================================================================================
00:14:17.854  
[2024-12-09T17:03:40.895Z]  Total                       :              11131.55      43.48       0.00     0.00    5544.12     100.82  574297.01
00:14:18.798  ************************************
00:14:18.798  END TEST xnvme_bdevperf
00:14:18.798  ************************************
00:14:18.798  
00:14:18.798  real	0m26.203s
00:14:18.798  user	0m14.860s
00:14:18.798  sys	0m10.856s
00:14:18.799   17:03:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:18.799   17:03:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:18.799   17:03:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:14:18.799   17:03:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:18.799   17:03:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:18.799   17:03:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:18.799  ************************************
00:14:18.799  START TEST xnvme_fio_plugin
00:14:18.799  ************************************
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:18.799    17:03:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:18.799    17:03:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:14:18.799    17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:18.799    17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:18.799    17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:14:18.799    17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:18.799   17:03:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:18.799  {
00:14:18.799    "subsystems": [
00:14:18.799      {
00:14:18.799        "subsystem": "bdev",
00:14:18.799        "config": [
00:14:18.799          {
00:14:18.799            "params": {
00:14:18.799              "io_mechanism": "io_uring_cmd",
00:14:18.799              "conserve_cpu": false,
00:14:18.799              "filename": "/dev/ng0n1",
00:14:18.799              "name": "xnvme_bdev"
00:14:18.799            },
00:14:18.799            "method": "bdev_xnvme_create"
00:14:18.799          },
00:14:18.799          {
00:14:18.799            "method": "bdev_wait_for_examine"
00:14:18.799          }
00:14:18.799        ]
00:14:18.799      }
00:14:18.799    ]
00:14:18.799  }
00:14:18.799  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:14:18.799  fio-3.35
00:14:18.799  Starting 1 thread
00:14:25.393  
00:14:25.393  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72514: Mon Dec  9 17:03:47 2024
00:14:25.393    read: IOPS=39.4k, BW=154MiB/s (161MB/s)(770MiB/5004msec)
00:14:25.393      slat (usec): min=2, max=537, avg= 3.45, stdev= 2.36
00:14:25.393      clat (usec): min=283, max=9625, avg=1488.57, stdev=350.09
00:14:25.393       lat (usec): min=297, max=9630, avg=1492.02, stdev=350.57
00:14:25.393      clat percentiles (usec):
00:14:25.393       |  1.00th=[ 1004],  5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1205],
00:14:25.393       | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1418], 60.00th=[ 1516],
00:14:25.393       | 70.00th=[ 1631], 80.00th=[ 1745], 90.00th=[ 1909], 95.00th=[ 2073],
00:14:25.393       | 99.00th=[ 2507], 99.50th=[ 2769], 99.90th=[ 4047], 99.95th=[ 4752],
00:14:25.393       | 99.99th=[ 6521]
00:14:25.393     bw (  KiB/s): min=123512, max=181760, per=100.00%, avg=157602.40, stdev=18172.75, samples=10
00:14:25.393     iops        : min=30878, max=45440, avg=39400.60, stdev=4543.19, samples=10
00:14:25.393    lat (usec)   : 500=0.01%, 1000=0.90%
00:14:25.393    lat (msec)   : 2=92.15%, 4=6.84%, 10=0.10%
00:14:25.393    cpu          : usr=40.06%, sys=58.66%, ctx=38, majf=0, minf=762
00:14:25.393    IO depths    : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6%
00:14:25.393       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:25.393       complete  : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0%
00:14:25.393       issued rwts: total=197056,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:25.393       latency   : target=0, window=0, percentile=100.00%, depth=64
00:14:25.393  
00:14:25.393  Run status group 0 (all jobs):
00:14:25.393     READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=770MiB (807MB), run=5004-5004msec
00:14:25.654  -----------------------------------------------------
00:14:25.654  Suppressions used:
00:14:25.654    count      bytes template
00:14:25.654        1         11 /usr/src/fio/parse.c
00:14:25.654        1          8 libtcmalloc_minimal.so
00:14:25.654        1        904 libcrypto.so
00:14:25.654  -----------------------------------------------------
00:14:25.654  
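The ldd/grep/awk sequence traced in this fio run resolves the ASan runtime that the spdk_bdev plugin links against, then preloads it ahead of the plugin, since the sanitizer must be the first DSO loaded into the fio process. A condensed sketch of that logic, with paths taken from this log (the fallback branch for non-sanitized builds is an assumption):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [[ -n "$asan_lib" ]]; then
  preload="$asan_lib $plugin"    # sanitizer first, then the ioengine
else
  preload="$plugin"
fi
# xnvme.json holds the same JSON printed in the config blocks above
LD_PRELOAD="$preload" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=xnvme.json --filename=xnvme_bdev --direct=1 --bs=4k \
    --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
    --thread=1 --name xnvme_bdev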
00:14:25.654   17:03:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:25.654   17:03:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:25.654   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:25.655    17:03:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:25.655    17:03:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:14:25.655    17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:25.655   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:25.655    17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:25.655    17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:14:25.655    17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:25.915   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:25.915   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:25.915   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:14:25.915   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:25.915   17:03:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:14:25.915  {
00:14:25.915    "subsystems": [
00:14:25.915      {
00:14:25.915        "subsystem": "bdev",
00:14:25.915        "config": [
00:14:25.915          {
00:14:25.915            "params": {
00:14:25.915              "io_mechanism": "io_uring_cmd",
00:14:25.915              "conserve_cpu": false,
00:14:25.915              "filename": "/dev/ng0n1",
00:14:25.915              "name": "xnvme_bdev"
00:14:25.915            },
00:14:25.915            "method": "bdev_xnvme_create"
00:14:25.915          },
00:14:25.915          {
00:14:25.915            "method": "bdev_wait_for_examine"
00:14:25.915          }
00:14:25.915        ]
00:14:25.915      }
00:14:25.915    ]
00:14:25.915  }
00:14:25.915  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:14:25.915  fio-3.35
00:14:25.915  Starting 1 thread
00:14:32.498  
00:14:32.498  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72605: Mon Dec  9 17:03:54 2024
00:14:32.498    write: IOPS=18.9k, BW=73.9MiB/s (77.5MB/s)(370MiB/5009msec); 0 zone resets
00:14:32.498      slat (nsec): min=2917, max=86168, avg=4359.82, stdev=2507.40
00:14:32.498      clat (usec): min=51, max=20723, avg=3277.10, stdev=4400.05
00:14:32.498       lat (usec): min=54, max=20726, avg=3281.46, stdev=4400.08
00:14:32.498      clat percentiles (usec):
00:14:32.498       |  1.00th=[  202],  5.00th=[  437], 10.00th=[  635], 20.00th=[  807],
00:14:32.498       | 30.00th=[ 1139], 40.00th=[ 1369], 50.00th=[ 1500], 60.00th=[ 1614],
00:14:32.498       | 70.00th=[ 1762], 80.00th=[ 2278], 90.00th=[12256], 95.00th=[13698],
00:14:32.498       | 99.00th=[15533], 99.50th=[16319], 99.90th=[17695], 99.95th=[18482],
00:14:32.498       | 99.99th=[19530]
00:14:32.498     bw (  KiB/s): min=49432, max=104968, per=100.00%, avg=75774.40, stdev=21996.94, samples=10
00:14:32.498     iops        : min=12358, max=26242, avg=18943.60, stdev=5499.24, samples=10
00:14:32.498    lat (usec)   : 100=0.05%, 250=1.42%, 500=5.02%, 750=10.42%, 1000=9.59%
00:14:32.498    lat (msec)   : 2=50.68%, 4=4.97%, 10=1.96%, 20=15.89%, 50=0.01%
00:14:32.498    cpu          : usr=34.74%, sys=64.16%, ctx=13, majf=0, minf=763
00:14:32.498    IO depths    : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.3%, 16=12.7%, 32=66.8%, >=64=8.8%
00:14:32.498       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:32.498       complete  : 0=0.0%, 4=95.9%, 8=1.5%, 16=1.4%, 32=0.5%, 64=0.7%, >=64=0.0%
00:14:32.498       issued rwts: total=0,94768,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:32.498       latency   : target=0, window=0, percentile=100.00%, depth=64
00:14:32.498  
00:14:32.498  Run status group 0 (all jobs):
00:14:32.498    WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=370MiB (388MB), run=5009-5009msec
00:14:32.760  -----------------------------------------------------
00:14:32.760  Suppressions used:
00:14:32.760    count      bytes template
00:14:32.760        1         11 /usr/src/fio/parse.c
00:14:32.760        1          8 libtcmalloc_minimal.so
00:14:32.760        1        904 libcrypto.so
00:14:32.760  -----------------------------------------------------
00:14:32.760  
00:14:32.760  ************************************
00:14:32.760  END TEST xnvme_fio_plugin
00:14:32.760  ************************************
00:14:32.760  
00:14:32.760  real	0m14.119s
00:14:32.760  user	0m6.744s
00:14:32.760  sys	0m6.928s
00:14:32.760   17:03:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:32.760   17:03:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:14:32.760   17:03:55 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:14:32.760   17:03:55 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:14:32.760   17:03:55 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:14:32.760   17:03:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:14:32.760   17:03:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:32.760   17:03:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:32.760   17:03:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:33.022  ************************************
00:14:33.022  START TEST xnvme_rpc
00:14:33.022  ************************************
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
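The cc map set up above turns the conserve_cpu boolean into the RPC's optional -c flag, so the same test body runs once per mode. A hedged sketch of that plumbing (the rpc.py path is assumed relative to the SPDK tree; the positional arguments match the traced call):

declare -A cc=([false]="" [true]="-c")
for mode in false true; do
  # filename, bdev name, io_mechanism, then -c only in conserve_cpu mode
  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ${cc[$mode]}
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
done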
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72690
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72690
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72690 ']'
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:33.022  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
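waitforlisten blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock instead of sleeping for a fixed time. A loose sketch of the idea (the real helper in autotest_common.sh is more elaborate; the rpc_get_methods probe and 0.1 s step are assumptions, while the 100-retry cap comes from the trace):

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
    ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1                                    # never came up
}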
00:14:33.022   17:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:33.022  [2024-12-09 17:03:55.903663] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:33.022  [2024-12-09 17:03:55.903840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72690 ]
00:14:33.301  [2024-12-09 17:03:56.071124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:33.301  [2024-12-09 17:03:56.218368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.271  xnvme_bdev
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
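Each [[ ... == ... ]] check above compares one parameter of the created bdev against its expected value; rpc_xnvme fetches the parameter by dumping the bdev subsystem config over RPC and filtering with jq. A sketch of that helper, with the jq expression copied verbatim from the trace:

rpc_xnvme() {
  local attr=$1
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.${attr}"
}

[[ $(rpc_xnvme conserve_cpu) == true ]]    # passes for the -c run above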
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72690
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72690 ']'
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72690
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:34.271    17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72690
00:14:34.271  killing process with pid 72690
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72690'
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72690
00:14:34.271   17:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72690
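killprocess, as traced here, first probes the pid with kill -0, then inspects the process name so it never signals sudo itself, and finally kills and reaps the target. A rough reconstruction (the real helper handles the sudo case specially; this sketch only covers the direct-child path seen in this run):

killprocess() {
  local pid=$1 process_name
  kill -0 "$pid" || return 1                          # stale pid
  process_name=$(ps --no-headers -o comm= "$pid")     # "reactor_0" here
  [[ $process_name == sudo ]] && return 1             # handled elsewhere
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                     # reap if it is our child
}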
00:14:36.186  ************************************
00:14:36.186  END TEST xnvme_rpc
00:14:36.186  ************************************
00:14:36.186  
00:14:36.186  real	0m3.236s
00:14:36.186  user	0m3.098s
00:14:36.186  sys	0m0.620s
00:14:36.186   17:03:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:36.186   17:03:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:36.186   17:03:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:14:36.186   17:03:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:36.186   17:03:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:36.186   17:03:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:36.186  ************************************
00:14:36.186  START TEST xnvme_bdevperf
00:14:36.186  ************************************
00:14:36.187   17:03:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:14:36.187   17:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:14:36.187   17:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:14:36.187   17:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:36.187   17:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:14:36.187    17:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:36.187    17:03:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:36.187    17:03:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:36.187  {
00:14:36.187    "subsystems": [
00:14:36.187      {
00:14:36.187        "subsystem": "bdev",
00:14:36.187        "config": [
00:14:36.187          {
00:14:36.187            "params": {
00:14:36.187              "io_mechanism": "io_uring_cmd",
00:14:36.187              "conserve_cpu": true,
00:14:36.187              "filename": "/dev/ng0n1",
00:14:36.187              "name": "xnvme_bdev"
00:14:36.187            },
00:14:36.187            "method": "bdev_xnvme_create"
00:14:36.187          },
00:14:36.187          {
00:14:36.187            "method": "bdev_wait_for_examine"
00:14:36.187          }
00:14:36.187        ]
00:14:36.187      }
00:14:36.187    ]
00:14:36.187  }
00:14:36.187  [2024-12-09 17:03:59.184623] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:36.187  [2024-12-09 17:03:59.184819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72766 ]
00:14:36.448  [2024-12-09 17:03:59.350960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:36.708  [2024-12-09 17:03:59.492980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:36.970  Running I/O for 5 seconds...
00:14:38.857      39780.00 IOPS,   155.39 MiB/s
[2024-12-09T17:04:02.840Z]     38829.50 IOPS,   151.68 MiB/s
[2024-12-09T17:04:04.223Z]     38974.00 IOPS,   152.24 MiB/s
[2024-12-09T17:04:05.165Z]     38077.50 IOPS,   148.74 MiB/s
00:14:42.124                                                                                                  Latency(us)
00:14:42.124  
[2024-12-09T17:04:05.165Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:42.124  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:42.124  	 xnvme_bdev          :       5.00   37353.34     145.91       0.00     0.00    1708.64     699.47   12149.37
00:14:42.124  
[2024-12-09T17:04:05.165Z]  ===================================================================================================================
00:14:42.124  
[2024-12-09T17:04:05.165Z]  Total                       :              37353.34     145.91       0.00     0.00    1708.64     699.47   12149.37
00:14:42.695   17:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:42.695   17:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:14:42.695    17:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:42.695    17:04:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:42.695    17:04:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:42.695  {
00:14:42.695    "subsystems": [
00:14:42.695      {
00:14:42.695        "subsystem": "bdev",
00:14:42.695        "config": [
00:14:42.695          {
00:14:42.695            "params": {
00:14:42.695              "io_mechanism": "io_uring_cmd",
00:14:42.695              "conserve_cpu": true,
00:14:42.695              "filename": "/dev/ng0n1",
00:14:42.695              "name": "xnvme_bdev"
00:14:42.695            },
00:14:42.695            "method": "bdev_xnvme_create"
00:14:42.695          },
00:14:42.695          {
00:14:42.695            "method": "bdev_wait_for_examine"
00:14:42.695          }
00:14:42.695        ]
00:14:42.695      }
00:14:42.695    ]
00:14:42.695  }
00:14:42.956  [2024-12-09 17:04:05.777288] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:42.956  [2024-12-09 17:04:05.777455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72839 ]
00:14:42.956  [2024-12-09 17:04:05.945579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:43.217  [2024-12-09 17:04:06.096206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:43.478  Running I/O for 5 seconds...
00:14:45.810      17560.00 IOPS,    68.59 MiB/s
[2024-12-09T17:04:09.837Z]     17759.00 IOPS,    69.37 MiB/s
[2024-12-09T17:04:10.779Z]     17976.67 IOPS,    70.22 MiB/s
[2024-12-09T17:04:11.722Z]     18262.25 IOPS,    71.34 MiB/s
[2024-12-09T17:04:11.722Z]     17807.00 IOPS,    69.56 MiB/s
00:14:48.681                                                                                                  Latency(us)
00:14:48.681  
[2024-12-09T17:04:11.722Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:48.681  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:14:48.681  	 xnvme_bdev          :       5.01   17810.04      69.57       0.00     0.00    3588.88      68.14   21374.82
00:14:48.681  
[2024-12-09T17:04:11.722Z]  ===================================================================================================================
00:14:48.681  
[2024-12-09T17:04:11.722Z]  Total                       :              17810.04      69.57       0.00     0.00    3588.88      68.14   21374.82
00:14:49.626   17:04:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:49.626    17:04:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:49.626   17:04:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:14:49.626    17:04:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:49.626    17:04:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:49.626  {
00:14:49.626    "subsystems": [
00:14:49.626      {
00:14:49.626        "subsystem": "bdev",
00:14:49.626        "config": [
00:14:49.626          {
00:14:49.626            "params": {
00:14:49.626              "io_mechanism": "io_uring_cmd",
00:14:49.626              "conserve_cpu": true,
00:14:49.626              "filename": "/dev/ng0n1",
00:14:49.626              "name": "xnvme_bdev"
00:14:49.626            },
00:14:49.626            "method": "bdev_xnvme_create"
00:14:49.626          },
00:14:49.626          {
00:14:49.626            "method": "bdev_wait_for_examine"
00:14:49.626          }
00:14:49.626        ]
00:14:49.626      }
00:14:49.626    ]
00:14:49.626  }
00:14:49.626  [2024-12-09 17:04:12.439748] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:49.626  [2024-12-09 17:04:12.439991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72919 ]
00:14:49.626  [2024-12-09 17:04:12.622309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:49.887  [2024-12-09 17:04:12.781264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:50.148  Running I/O for 5 seconds...
00:14:52.478      78976.00 IOPS,   308.50 MiB/s
[2024-12-09T17:04:16.456Z]     77408.00 IOPS,   302.38 MiB/s
[2024-12-09T17:04:17.394Z]     81770.67 IOPS,   319.42 MiB/s
[2024-12-09T17:04:18.324Z]     85312.00 IOPS,   333.25 MiB/s
00:14:55.283                                                                                                  Latency(us)
00:14:55.283  
[2024-12-09T17:04:18.324Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:55.283  Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:14:55.283  	 xnvme_bdev          :       5.00   87502.28     341.81       0.00     0.00     727.98     345.01    2999.53
00:14:55.283  
[2024-12-09T17:04:18.324Z]  ===================================================================================================================
00:14:55.283  
[2024-12-09T17:04:18.324Z]  Total                       :              87502.28     341.81       0.00     0.00     727.98     345.01    2999.53
00:14:55.848   17:04:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:55.848   17:04:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:14:55.848    17:04:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:55.848    17:04:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:55.848    17:04:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:55.848  {
00:14:55.848    "subsystems": [
00:14:55.848      {
00:14:55.848        "subsystem": "bdev",
00:14:55.848        "config": [
00:14:55.848          {
00:14:55.848            "params": {
00:14:55.848              "io_mechanism": "io_uring_cmd",
00:14:55.848              "conserve_cpu": true,
00:14:55.848              "filename": "/dev/ng0n1",
00:14:55.848              "name": "xnvme_bdev"
00:14:55.848            },
00:14:55.848            "method": "bdev_xnvme_create"
00:14:55.848          },
00:14:55.848          {
00:14:55.848            "method": "bdev_wait_for_examine"
00:14:55.848          }
00:14:55.848        ]
00:14:55.848      }
00:14:55.848    ]
00:14:55.848  }
00:14:55.848  [2024-12-09 17:04:18.790418] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:14:55.848  [2024-12-09 17:04:18.790544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72989 ]
00:14:56.105  [2024-12-09 17:04:18.950274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:56.105  [2024-12-09 17:04:19.045310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:56.424  Running I/O for 5 seconds...
00:14:58.296      14012.00 IOPS,    54.73 MiB/s
[2024-12-09T17:04:22.277Z]     14514.00 IOPS,    56.70 MiB/s
[2024-12-09T17:04:23.652Z]     14080.67 IOPS,    55.00 MiB/s
[2024-12-09T17:04:24.586Z]     14475.25 IOPS,    56.54 MiB/s
[2024-12-09T17:04:24.844Z]     12687.00 IOPS,    49.56 MiB/s
00:15:01.803                                                                                                  Latency(us)
00:15:01.803  
[2024-12-09T17:04:24.844Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:01.803  Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:15:01.803  	 xnvme_bdev          :       5.40   11760.21      45.94       0.00     0.00    5240.56      56.71  745295.56
00:15:01.803  
[2024-12-09T17:04:24.844Z]  ===================================================================================================================
00:15:01.803  
[2024-12-09T17:04:24.844Z]  Total                       :              11760.21      45.94       0.00     0.00    5240.56      56.71  745295.56
00:15:02.371  
00:15:02.371  real	0m26.299s
00:15:02.371  user	0m19.638s
00:15:02.371  sys	0m5.299s
00:15:02.371   17:04:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:02.371   17:04:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:02.371  ************************************
00:15:02.371  END TEST xnvme_bdevperf
00:15:02.371  ************************************
00:15:02.633   17:04:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:15:02.633   17:04:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:02.633   17:04:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:02.633   17:04:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:02.633  ************************************
00:15:02.633  START TEST xnvme_fio_plugin
00:15:02.633  ************************************
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:02.633    17:04:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:02.633    17:04:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:02.633    17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:02.633    17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:02.633    17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:02.633    17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:02.633   17:04:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:02.633  {
00:15:02.633    "subsystems": [
00:15:02.633      {
00:15:02.633        "subsystem": "bdev",
00:15:02.633        "config": [
00:15:02.633          {
00:15:02.633            "params": {
00:15:02.633              "io_mechanism": "io_uring_cmd",
00:15:02.633              "conserve_cpu": true,
00:15:02.633              "filename": "/dev/ng0n1",
00:15:02.633              "name": "xnvme_bdev"
00:15:02.633            },
00:15:02.633            "method": "bdev_xnvme_create"
00:15:02.633          },
00:15:02.633          {
00:15:02.633            "method": "bdev_wait_for_examine"
00:15:02.633          }
00:15:02.633        ]
00:15:02.633      }
00:15:02.633    ]
00:15:02.633  }
00:15:02.633  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:02.633  fio-3.35
00:15:02.633  Starting 1 thread
00:15:09.222  
00:15:09.222  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73107: Mon Dec  9 17:04:31 2024
00:15:09.222    read: IOPS=40.1k, BW=157MiB/s (164MB/s)(784MiB/5002msec)
00:15:09.222      slat (usec): min=2, max=1079, avg= 3.58, stdev= 3.41
00:15:09.222      clat (usec): min=603, max=8422, avg=1450.13, stdev=289.05
00:15:09.222       lat (usec): min=606, max=8425, avg=1453.71, stdev=289.43
00:15:09.222      clat percentiles (usec):
00:15:09.222       |  1.00th=[  840],  5.00th=[ 1029], 10.00th=[ 1106], 20.00th=[ 1205],
00:15:09.222       | 30.00th=[ 1287], 40.00th=[ 1369], 50.00th=[ 1434], 60.00th=[ 1516],
00:15:09.222       | 70.00th=[ 1582], 80.00th=[ 1680], 90.00th=[ 1811], 95.00th=[ 1942],
00:15:09.222       | 99.00th=[ 2180], 99.50th=[ 2278], 99.90th=[ 2638], 99.95th=[ 2999],
00:15:09.222       | 99.99th=[ 4686]
00:15:09.222     bw (  KiB/s): min=141312, max=187528, per=99.59%, avg=159890.11, stdev=15470.78, samples=9
00:15:09.222     iops        : min=35328, max=46882, avg=39972.44, stdev=3867.63, samples=9
00:15:09.222    lat (usec)   : 750=0.34%, 1000=3.71%
00:15:09.222    lat (msec)   : 2=92.54%, 4=3.38%, 10=0.03%
00:15:09.222    cpu          : usr=57.45%, sys=39.29%, ctx=17, majf=0, minf=762
00:15:09.222    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:15:09.222       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:09.222       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:15:09.222       issued rwts: total=200766,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:09.222       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:09.222  
00:15:09.222  Run status group 0 (all jobs):
00:15:09.222     READ: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=784MiB (822MB), run=5002-5002msec
00:15:09.483  -----------------------------------------------------
00:15:09.483  Suppressions used:
00:15:09.483    count      bytes template
00:15:09.483        1         11 /usr/src/fio/parse.c
00:15:09.483        1          8 libtcmalloc_minimal.so
00:15:09.483        1        904 libcrypto.so
00:15:09.483  -----------------------------------------------------
00:15:09.483  
00:15:09.483   17:04:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:09.483   17:04:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:09.484    17:04:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:09.484    17:04:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:09.484    17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:09.484    17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:09.484    17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:09.484    17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:09.484   17:04:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:09.484  {
00:15:09.484    "subsystems": [
00:15:09.484      {
00:15:09.484        "subsystem": "bdev",
00:15:09.484        "config": [
00:15:09.484          {
00:15:09.484            "params": {
00:15:09.484              "io_mechanism": "io_uring_cmd",
00:15:09.484              "conserve_cpu": true,
00:15:09.484              "filename": "/dev/ng0n1",
00:15:09.484              "name": "xnvme_bdev"
00:15:09.484            },
00:15:09.484            "method": "bdev_xnvme_create"
00:15:09.484          },
00:15:09.484          {
00:15:09.484            "method": "bdev_wait_for_examine"
00:15:09.484          }
00:15:09.484        ]
00:15:09.484      }
00:15:09.484    ]
00:15:09.484  }
00:15:09.768  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:09.768  fio-3.35
00:15:09.768  Starting 1 thread
00:15:16.359  
00:15:16.359  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73198: Mon Dec  9 17:04:38 2024
00:15:16.359    write: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(416MiB/5009msec); 0 zone resets
00:15:16.359      slat (usec): min=2, max=134, avg= 4.47, stdev= 2.80
00:15:16.359      clat (usec): min=53, max=26136, avg=2888.46, stdev=3971.76
00:15:16.359       lat (usec): min=56, max=26139, avg=2892.93, stdev=3971.78
00:15:16.359      clat percentiles (usec):
00:15:16.359       |  1.00th=[  192],  5.00th=[  437], 10.00th=[  611], 20.00th=[  898],
00:15:16.359       | 30.00th=[ 1352], 40.00th=[ 1467], 50.00th=[ 1549], 60.00th=[ 1631],
00:15:16.359       | 70.00th=[ 1729], 80.00th=[ 1909], 90.00th=[11600], 95.00th=[13173],
00:15:16.359       | 99.00th=[15270], 99.50th=[15926], 99.90th=[18744], 99.95th=[21627],
00:15:16.359       | 99.99th=[22938]
00:15:16.359     bw (  KiB/s): min=47320, max=145304, per=100.00%, avg=85047.40, stdev=41025.85, samples=10
00:15:16.359     iops        : min=11830, max=36326, avg=21261.80, stdev=10256.47, samples=10
00:15:16.359    lat (usec)   : 100=0.07%, 250=1.47%, 500=4.59%, 750=10.77%, 1000=4.03%
00:15:16.359    lat (msec)   : 2=61.82%, 4=3.60%, 10=1.06%, 20=12.50%, 50=0.09%
00:15:16.359    cpu          : usr=69.09%, sys=23.70%, ctx=14, majf=0, minf=763
00:15:16.359    IO depths    : 1=1.0%, 2=1.9%, 4=3.9%, 8=7.9%, 16=15.8%, 32=61.9%, >=64=7.6%
00:15:16.359       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.359       complete  : 0=0.0%, 4=96.4%, 8=1.3%, 16=1.1%, 32=0.3%, 64=0.9%, >=64=0.0%
00:15:16.359       issued rwts: total=0,106388,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.359       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:16.359  
00:15:16.359  Run status group 0 (all jobs):
00:15:16.359    WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=416MiB (436MB), run=5009-5009msec
00:15:16.621  -----------------------------------------------------
00:15:16.621  Suppressions used:
00:15:16.621    count      bytes template
00:15:16.621        1         11 /usr/src/fio/parse.c
00:15:16.621        1          8 libtcmalloc_minimal.so
00:15:16.621        1        904 libcrypto.so
00:15:16.621  -----------------------------------------------------
00:15:16.621  
00:15:16.621  
00:15:16.621  real	0m13.994s
00:15:16.621  user	0m9.293s
00:15:16.621  sys	0m3.847s
00:15:16.621   17:04:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:16.621  ************************************
00:15:16.621  END TEST xnvme_fio_plugin
00:15:16.621  ************************************
00:15:16.621   17:04:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:16.621   17:04:39 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72690
00:15:16.621   17:04:39 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72690 ']'
00:15:16.621  Process with pid 72690 is not found
00:15:16.621   17:04:39 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72690
00:15:16.621  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72690) - No such process
00:15:16.621   17:04:39 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72690 is not found'
00:15:16.621  
00:15:16.621  real	3m37.001s
00:15:16.621  user	2m3.070s
00:15:16.621  sys	1m19.245s
00:15:16.621  ************************************
00:15:16.621  END TEST nvme_xnvme
00:15:16.621   17:04:39 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:16.621   17:04:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:16.621  ************************************
00:15:16.621   17:04:39  -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:15:16.621   17:04:39  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:16.621   17:04:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:16.621   17:04:39  -- common/autotest_common.sh@10 -- # set +x
00:15:16.621  ************************************
00:15:16.621  START TEST blockdev_xnvme
00:15:16.621  ************************************
00:15:16.621   17:04:39 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:15:16.621  * Looking for test storage...
00:15:16.621  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:15:16.621    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:16.621     17:04:39 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:15:16.621     17:04:39 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:16.883    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@345 -- # : 1
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:16.883     17:04:39 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1
00:15:16.883     17:04:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1
00:15:16.883     17:04:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:16.883     17:04:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1
00:15:16.883    17:04:39 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:15:16.883     17:04:39 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2
00:15:16.884     17:04:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2
00:15:16.884     17:04:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:16.884     17:04:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2
00:15:16.884    17:04:39 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:15:16.884    17:04:39 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:16.884    17:04:39 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:16.884    17:04:39 blockdev_xnvme -- scripts/common.sh@368 -- # return 0
00:15:16.884    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:16.884    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:16.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:16.884  		--rc genhtml_branch_coverage=1
00:15:16.884  		--rc genhtml_function_coverage=1
00:15:16.884  		--rc genhtml_legend=1
00:15:16.884  		--rc geninfo_all_blocks=1
00:15:16.884  		--rc geninfo_unexecuted_blocks=1
00:15:16.884  		
00:15:16.884  		'
00:15:16.884    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:16.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:16.884  		--rc genhtml_branch_coverage=1
00:15:16.884  		--rc genhtml_function_coverage=1
00:15:16.884  		--rc genhtml_legend=1
00:15:16.884  		--rc geninfo_all_blocks=1
00:15:16.884  		--rc geninfo_unexecuted_blocks=1
00:15:16.884  		
00:15:16.884  		'
00:15:16.884    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:16.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:16.884  		--rc genhtml_branch_coverage=1
00:15:16.884  		--rc genhtml_function_coverage=1
00:15:16.884  		--rc genhtml_legend=1
00:15:16.884  		--rc geninfo_all_blocks=1
00:15:16.884  		--rc geninfo_unexecuted_blocks=1
00:15:16.884  		
00:15:16.884  		'
00:15:16.884    17:04:39 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:16.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:16.884  		--rc genhtml_branch_coverage=1
00:15:16.884  		--rc genhtml_function_coverage=1
00:15:16.884  		--rc genhtml_legend=1
00:15:16.884  		--rc geninfo_all_blocks=1
00:15:16.884  		--rc geninfo_unexecuted_blocks=1
00:15:16.884  		
00:15:16.884  		'
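
The nested trace above is cmp_versions from scripts/common.sh deciding whether the installed lcov predates version 2 before the legacy --rc coverage flags are enabled: both version strings are split on ".", "-" and ":" and compared numerically field by field. A condensed, runnable sketch of that logic (the real helper also routes each field through a decimal() sanity check, omitted here):

    # lt A B: succeed when version A is strictly less than version B.
    lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"   # split on the same separators as the trace
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater => not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # all fields equal => not less-than
    }

    lt 1.15 2 && echo "lcov < 2: keep the legacy --rc flags"   # succeeds, as in the trace
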
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:15:16.884    17:04:39 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@20 -- # :
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:15:16.884    17:04:39 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek=
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]]
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]]
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73338
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73338
00:15:16.884   17:04:39 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73338 ']'
00:15:16.884   17:04:39 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:16.884   17:04:39 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:16.884   17:04:39 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:15:16.884  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:16.884   17:04:39 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:16.884   17:04:39 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:16.884   17:04:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
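
The sequence above is the standard spdk_tgt bring-up used throughout these suites: launch the target in the background, record its pid, arm a cleanup trap, then poll until the RPC socket answers. A simplified sketch of that shape (the real waitforlisten in autotest_common.sh also probes the RPC interface itself, not just the socket node, and the retry interval here is illustrative):

    start_spdk_tgt() {
        "$rootdir/build/bin/spdk_tgt" &
        spdk_tgt_pid=$!
        trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$spdk_tgt_pid"
    }

    waitforlisten() {
        local rpc_addr=/var/tmp/spdk.sock max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$1" 2> /dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0          # socket node is up
            sleep 0.1
        done
        return 1
    }
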
00:15:16.884  [2024-12-09 17:04:39.827197] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:15:16.884  [2024-12-09 17:04:39.827375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73338 ]
00:15:17.147  [2024-12-09 17:04:39.996377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:17.147  [2024-12-09 17:04:40.162395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:18.092   17:04:40 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:18.092   17:04:40 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0
00:15:18.092   17:04:40 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:15:18.092   17:04:40 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf
00:15:18.092   17:04:40 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring
00:15:18.092   17:04:40 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes
00:15:18.092   17:04:40 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:15:18.665  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:19.238  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:15:19.238  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:15:19.238  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:15:19.238  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:15:19.238   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:15:19.238   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
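
get_zoned_devs, traced above, walks every namespace under /sys/class/nvme and filters out zoned ones, since the xnvme bdevs exercised here expect conventional block devices. Per namespace the probe reduces to one sysfs read; a condensed sketch (control flow simplified relative to the real helper):

    # A namespace is zoned when its queue/zoned attribute exists and is not "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

    for nvme in /sys/class/nvme/nvme*; do
        for ns in "$nvme/"nvme*n*; do
            is_block_zoned "${ns##*/}" && echo "zoned namespace: ${ns##*/}"
        done
    done
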
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 ))
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.239    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c'
00:15:19.239  nvme0n1
00:15:19.239  nvme0n2
00:15:19.239  nvme0n3
00:15:19.239  nvme1n1
00:15:19.239  nvme2n1
00:15:19.239  nvme3n1
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
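
The batch above is how blockdev.sh turns the discovered namespaces into xNVMe bdevs: one bdev_xnvme_create line per block node under /dev, all fed through a single rpc_cmd session (rpc_cmd is the autotest_common.sh helper that keeps a persistent rpc.py connection; the trailing -c flag is carried over verbatim from the trace). A condensed sketch of bdev/blockdev.sh@88-100:

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue   # skip anything that is not a block node
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done
    # Batch all create commands into one RPC session, as the trace shows.
    (( ${#nvmes[@]} > 0 )) && printf '%s\n' "${nvmes[@]}" | rpc_cmd
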
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.239   17:04:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat
00:15:19.239    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.239    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.239    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:15:19.239    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:19.239    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:15:19.239    17:04:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.239   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:15:19.240    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "nvme0n1",' '  "aliases": [' '    "c7f1eb36-c017-4680-b9f4-19a2be785b8f"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "c7f1eb36-c017-4680-b9f4-19a2be785b8f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n2",' '  "aliases": [' '    "dca67072-ce97-4774-8815-b908bb1addba"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "dca67072-ce97-4774-8815-b908bb1addba",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n3",' '  "aliases": [' '    "f35b22cf-1fe6-425d-98a3-4b091a914fac"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "f35b22cf-1fe6-425d-98a3-4b091a914fac",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme1n1",' '  "aliases": [' '    "c194976f-16a9-4e6f-8bce-94138842a85e"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "c194976f-16a9-4e6f-8bce-94138842a85e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme2n1",' '  "aliases": [' '    "4efb12c4-1ce8-44ef-bfaa-32b6211d88f1"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "4efb12c4-1ce8-44ef-bfaa-32b6211d88f1",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme3n1",' '  "aliases": [' '    "5fde65bb-8a1f-452b-868e-2052cdf0a15d"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "5fde65bb-8a1f-452b-868e-2052cdf0a15d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}'
00:15:19.240    17:04:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name
00:15:19.501   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:15:19.501   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1
00:15:19.501   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:15:19.501   17:04:42 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73338
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73338 ']'
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73338
00:15:19.501    17:04:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:19.501    17:04:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73338
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:19.501  killing process with pid 73338
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73338'
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73338
00:15:19.501   17:04:42 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73338
00:15:21.424   17:04:44 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:15:21.425   17:04:44 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:15:21.425   17:04:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:15:21.425   17:04:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:21.425   17:04:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:21.425  ************************************
00:15:21.425  START TEST bdev_hello_world
00:15:21.425  ************************************
00:15:21.425   17:04:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:15:21.425  [2024-12-09 17:04:44.261908] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:15:21.425  [2024-12-09 17:04:44.262081] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73622 ]
00:15:21.425  [2024-12-09 17:04:44.430731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:21.685  [2024-12-09 17:04:44.574594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.257  [2024-12-09 17:04:45.035518] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:15:22.257  [2024-12-09 17:04:45.035600] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1
00:15:22.257  [2024-12-09 17:04:45.035621] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:15:22.257  [2024-12-09 17:04:45.038025] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:15:22.257  [2024-12-09 17:04:45.039621] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:15:22.257  [2024-12-09 17:04:45.039668] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:15:22.257  [2024-12-09 17:04:45.040241] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:15:22.257  
00:15:22.257  [2024-12-09 17:04:45.040278] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:15:23.200  
00:15:23.200  real	0m1.733s
00:15:23.200  user	0m1.291s
00:15:23.200  sys	0m0.283s
00:15:23.200   17:04:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:23.200   17:04:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:15:23.200  ************************************
00:15:23.200  END TEST bdev_hello_world
00:15:23.200  ************************************
00:15:23.200   17:04:45 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:15:23.200   17:04:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:23.200   17:04:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:23.200   17:04:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:23.200  ************************************
00:15:23.200  START TEST bdev_bounds
00:15:23.200  ************************************
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73666
00:15:23.200  Process bdevio pid: 73666
00:15:23.200  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73666'
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73666
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73666 ']'
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:23.200   17:04:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:15:23.200  [2024-12-09 17:04:46.062777] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:15:23.200  [2024-12-09 17:04:46.062988] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73666 ]
00:15:23.200  [2024-12-09 17:04:46.230042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:23.461  [2024-12-09 17:04:46.388808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:23.461  [2024-12-09 17:04:46.389221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:23.461  [2024-12-09 17:04:46.389289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:24.034   17:04:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:24.034   17:04:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:15:24.035   17:04:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:15:24.035  I/O targets:
00:15:24.035    nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:15:24.035    nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:15:24.035    nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:15:24.035    nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:15:24.035    nvme2n1: 262144 blocks of 4096 bytes (1024 MiB)
00:15:24.035    nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:15:24.035  
00:15:24.035  
00:15:24.035       CUnit - A unit testing framework for C - Version 2.1-3
00:15:24.035       http://cunit.sourceforge.net/
00:15:24.035  
00:15:24.035  
00:15:24.035  Suite: bdevio tests on: nvme3n1
00:15:24.035    Test: blockdev write read block ...passed
00:15:24.035    Test: blockdev write zeroes read block ...passed
00:15:24.035    Test: blockdev write zeroes read no split ...passed
00:15:24.035    Test: blockdev write zeroes read split ...passed
00:15:24.297    Test: blockdev write zeroes read split partial ...passed
00:15:24.297    Test: blockdev reset ...passed
00:15:24.297    Test: blockdev write read 8 blocks ...passed
00:15:24.297    Test: blockdev write read size > 128k ...passed
00:15:24.297    Test: blockdev write read invalid size ...passed
00:15:24.297    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:24.297    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:24.297    Test: blockdev write read max offset ...passed
00:15:24.297    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:24.297    Test: blockdev writev readv 8 blocks ...passed
00:15:24.297    Test: blockdev writev readv 30 x 1block ...passed
00:15:24.297    Test: blockdev writev readv block ...passed
00:15:24.297    Test: blockdev writev readv size > 128k ...passed
00:15:24.297    Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:24.297    Test: blockdev comparev and writev ...passed
00:15:24.297    Test: blockdev nvme passthru rw ...passed
00:15:24.297    Test: blockdev nvme passthru vendor specific ...passed
00:15:24.297    Test: blockdev nvme admin passthru ...passed
00:15:24.297    Test: blockdev copy ...passed
00:15:24.297  Suite: bdevio tests on: nvme2n1
00:15:24.297    Test: blockdev write read block ...passed
00:15:24.297    Test: blockdev write zeroes read block ...passed
00:15:24.297    Test: blockdev write zeroes read no split ...passed
00:15:24.297    Test: blockdev write zeroes read split ...passed
00:15:24.297    Test: blockdev write zeroes read split partial ...passed
00:15:24.297    Test: blockdev reset ...passed
00:15:24.297    Test: blockdev write read 8 blocks ...passed
00:15:24.297    Test: blockdev write read size > 128k ...passed
00:15:24.297    Test: blockdev write read invalid size ...passed
00:15:24.297    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:24.297    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:24.297    Test: blockdev write read max offset ...passed
00:15:24.297    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:24.297    Test: blockdev writev readv 8 blocks ...passed
00:15:24.297    Test: blockdev writev readv 30 x 1block ...passed
00:15:24.297    Test: blockdev writev readv block ...passed
00:15:24.297    Test: blockdev writev readv size > 128k ...passed
00:15:24.297    Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:24.297    Test: blockdev comparev and writev ...passed
00:15:24.297    Test: blockdev nvme passthru rw ...passed
00:15:24.297    Test: blockdev nvme passthru vendor specific ...passed
00:15:24.297    Test: blockdev nvme admin passthru ...passed
00:15:24.297    Test: blockdev copy ...passed
00:15:24.297  Suite: bdevio tests on: nvme1n1
00:15:24.297    Test: blockdev write read block ...passed
00:15:24.297    Test: blockdev write zeroes read block ...passed
00:15:24.297    Test: blockdev write zeroes read no split ...passed
00:15:24.297    Test: blockdev write zeroes read split ...passed
00:15:24.297    Test: blockdev write zeroes read split partial ...passed
00:15:24.297    Test: blockdev reset ...passed
00:15:24.297    Test: blockdev write read 8 blocks ...passed
00:15:24.297    Test: blockdev write read size > 128k ...passed
00:15:24.297    Test: blockdev write read invalid size ...passed
00:15:24.297    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:24.297    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:24.297    Test: blockdev write read max offset ...passed
00:15:24.297    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:24.297    Test: blockdev writev readv 8 blocks ...passed
00:15:24.297    Test: blockdev writev readv 30 x 1block ...passed
00:15:24.297    Test: blockdev writev readv block ...passed
00:15:24.297    Test: blockdev writev readv size > 128k ...passed
00:15:24.297    Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:24.297    Test: blockdev comparev and writev ...passed
00:15:24.297    Test: blockdev nvme passthru rw ...passed
00:15:24.297    Test: blockdev nvme passthru vendor specific ...passed
00:15:24.297    Test: blockdev nvme admin passthru ...passed
00:15:24.297    Test: blockdev copy ...passed
00:15:24.297  Suite: bdevio tests on: nvme0n3
00:15:24.297    Test: blockdev write read block ...passed
00:15:24.297    Test: blockdev write zeroes read block ...passed
00:15:24.297    Test: blockdev write zeroes read no split ...passed
00:15:24.297    Test: blockdev write zeroes read split ...passed
00:15:24.560    Test: blockdev write zeroes read split partial ...passed
00:15:24.560    Test: blockdev reset ...passed
00:15:24.560    Test: blockdev write read 8 blocks ...passed
00:15:24.560    Test: blockdev write read size > 128k ...passed
00:15:24.560    Test: blockdev write read invalid size ...passed
00:15:24.560    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:24.560    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:24.560    Test: blockdev write read max offset ...passed
00:15:24.560    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:24.560    Test: blockdev writev readv 8 blocks ...passed
00:15:24.560    Test: blockdev writev readv 30 x 1block ...passed
00:15:24.560    Test: blockdev writev readv block ...passed
00:15:24.560    Test: blockdev writev readv size > 128k ...passed
00:15:24.560    Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:24.560    Test: blockdev comparev and writev ...passed
00:15:24.560    Test: blockdev nvme passthru rw ...passed
00:15:24.560    Test: blockdev nvme passthru vendor specific ...passed
00:15:24.560    Test: blockdev nvme admin passthru ...passed
00:15:24.560    Test: blockdev copy ...passed
00:15:24.560  Suite: bdevio tests on: nvme0n2
00:15:24.560    Test: blockdev write read block ...passed
00:15:24.560    Test: blockdev write zeroes read block ...passed
00:15:24.560    Test: blockdev write zeroes read no split ...passed
00:15:24.560    Test: blockdev write zeroes read split ...passed
00:15:24.560    Test: blockdev write zeroes read split partial ...passed
00:15:24.560    Test: blockdev reset ...passed
00:15:24.560    Test: blockdev write read 8 blocks ...passed
00:15:24.560    Test: blockdev write read size > 128k ...passed
00:15:24.560    Test: blockdev write read invalid size ...passed
00:15:24.560    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:24.560    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:24.560    Test: blockdev write read max offset ...passed
00:15:24.560    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:24.560    Test: blockdev writev readv 8 blocks ...passed
00:15:24.560    Test: blockdev writev readv 30 x 1block ...passed
00:15:24.560    Test: blockdev writev readv block ...passed
00:15:24.560    Test: blockdev writev readv size > 128k ...passed
00:15:24.560    Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:24.560    Test: blockdev comparev and writev ...passed
00:15:24.560    Test: blockdev nvme passthru rw ...passed
00:15:24.560    Test: blockdev nvme passthru vendor specific ...passed
00:15:24.561    Test: blockdev nvme admin passthru ...passed
00:15:24.561    Test: blockdev copy ...passed
00:15:24.561  Suite: bdevio tests on: nvme0n1
00:15:24.561    Test: blockdev write read block ...passed
00:15:24.561    Test: blockdev write zeroes read block ...passed
00:15:24.561    Test: blockdev write zeroes read no split ...passed
00:15:24.561    Test: blockdev write zeroes read split ...passed
00:15:24.561    Test: blockdev write zeroes read split partial ...passed
00:15:24.561    Test: blockdev reset ...passed
00:15:24.561    Test: blockdev write read 8 blocks ...passed
00:15:24.561    Test: blockdev write read size > 128k ...passed
00:15:24.561    Test: blockdev write read invalid size ...passed
00:15:24.561    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:24.561    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:24.561    Test: blockdev write read max offset ...passed
00:15:24.561    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:24.561    Test: blockdev writev readv 8 blocks ...passed
00:15:24.561    Test: blockdev writev readv 30 x 1block ...passed
00:15:24.561    Test: blockdev writev readv block ...passed
00:15:24.561    Test: blockdev writev readv size > 128k ...passed
00:15:24.561    Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:24.561    Test: blockdev comparev and writev ...passed
00:15:24.561    Test: blockdev nvme passthru rw ...passed
00:15:24.561    Test: blockdev nvme passthru vendor specific ...passed
00:15:24.561    Test: blockdev nvme admin passthru ...passed
00:15:24.561    Test: blockdev copy ...passed
00:15:24.561  
00:15:24.561  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:24.561                suites      6      6    n/a      0        0
00:15:24.561                 tests    138    138    138      0        0
00:15:24.561               asserts    780    780    780      0      n/a
00:15:24.561  
00:15:24.561  Elapsed time =    1.285 seconds
00:15:24.561  0
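
A quick consistency check on the summary above: each of the 6 bdevio suites runs the same 23 tests, and 6 × 23 = 138, matching both the Ran and Passed columns; the 780 asserts work out to an average of roughly 5.7 asserts per test.
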
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73666
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73666 ']'
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73666
00:15:24.561    17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:24.561    17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73666
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73666'
00:15:24.561  killing process with pid 73666
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73666
00:15:24.561   17:04:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73666
00:15:25.506   17:04:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:15:25.506  
00:15:25.506  real	0m2.490s
00:15:25.506  user	0m5.839s
00:15:25.506  sys	0m0.470s
00:15:25.506   17:04:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:25.506   17:04:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:25.506  ************************************
00:15:25.506  END TEST bdev_bounds
00:15:25.506  ************************************
00:15:25.506   17:04:48 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:25.506   17:04:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:25.506   17:04:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:25.506   17:04:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:25.767  ************************************
00:15:25.767  START TEST bdev_nbd
00:15:25.767  ************************************
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:25.767    17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:15:25.767   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73720
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73720 /var/tmp/spdk-nbd.sock
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73720 ']'
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:25.768  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:25.768   17:04:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:15:25.768  [2024-12-09 17:04:48.644926] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:15:25.768  [2024-12-09 17:04:48.645276] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:26.029  [2024-12-09 17:04:48.812378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:26.029  [2024-12-09 17:04:48.957614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1'
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1'
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:15:26.604   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:26.604    17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:15:26.866    17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:26.866  1+0 records in
00:15:26.866  1+0 records out
00:15:26.866  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986823 s, 4.2 MB/s
00:15:26.866    17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
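
Each attach-and-verify round above follows the same recipe: nbd_start_disk is issued over the /var/tmp/spdk-nbd.sock RPC socket without an explicit index, the returned /dev/nbdN is awaited in /proc/partitions, and a single 4 KiB O_DIRECT read proves the device is live. A condensed sketch of the waitfornbd helper from autotest_common.sh (scratch-file path and sleep interval are illustrative):

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to publish the nbd device.
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device answers reads: one 4 KiB block, bypassing the page cache.
        for (( i = 1; i <= 20; i++ )); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
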
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:26.866   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:26.866    17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2
00:15:27.129   17:04:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:15:27.129    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:27.129  1+0 records in
00:15:27.129  1+0 records out
00:15:27.129  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011916 s, 3.4 MB/s
00:15:27.129    17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:27.129   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:27.129    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:15:27.391    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:27.391  1+0 records in
00:15:27.391  1+0 records out
00:15:27.391  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858568 s, 4.8 MB/s
00:15:27.391    17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:27.391   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:27.391    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:15:27.653    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:27.653  1+0 records in
00:15:27.653  1+0 records out
00:15:27.653  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109712 s, 3.7 MB/s
00:15:27.653    17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:27.653   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:27.653    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:15:27.915    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:27.915  1+0 records in
00:15:27.915  1+0 records out
00:15:27.915  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000999831 s, 4.1 MB/s
00:15:27.915    17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:27.915   17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:27.915    17:04:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:15:28.177    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:28.177  1+0 records in
00:15:28.177  1+0 records out
00:15:28.177  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149134 s, 2.7 MB/s
00:15:28.177    17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:15:28.177   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:15:28.177    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:28.438   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:15:28.438    {
00:15:28.438      "nbd_device": "/dev/nbd0",
00:15:28.438      "bdev_name": "nvme0n1"
00:15:28.438    },
00:15:28.438    {
00:15:28.438      "nbd_device": "/dev/nbd1",
00:15:28.438      "bdev_name": "nvme0n2"
00:15:28.438    },
00:15:28.438    {
00:15:28.438      "nbd_device": "/dev/nbd2",
00:15:28.438      "bdev_name": "nvme0n3"
00:15:28.438    },
00:15:28.438    {
00:15:28.438      "nbd_device": "/dev/nbd3",
00:15:28.438      "bdev_name": "nvme1n1"
00:15:28.438    },
00:15:28.438    {
00:15:28.438      "nbd_device": "/dev/nbd4",
00:15:28.438      "bdev_name": "nvme2n1"
00:15:28.438    },
00:15:28.438    {
00:15:28.438      "nbd_device": "/dev/nbd5",
00:15:28.438      "bdev_name": "nvme3n1"
00:15:28.438    }
00:15:28.438  ]'
00:15:28.438   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:15:28.438    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:15:28.438    {
00:15:28.439      "nbd_device": "/dev/nbd0",
00:15:28.439      "bdev_name": "nvme0n1"
00:15:28.439    },
00:15:28.439    {
00:15:28.439      "nbd_device": "/dev/nbd1",
00:15:28.439      "bdev_name": "nvme0n2"
00:15:28.439    },
00:15:28.439    {
00:15:28.439      "nbd_device": "/dev/nbd2",
00:15:28.439      "bdev_name": "nvme0n3"
00:15:28.439    },
00:15:28.439    {
00:15:28.439      "nbd_device": "/dev/nbd3",
00:15:28.439      "bdev_name": "nvme1n1"
00:15:28.439    },
00:15:28.439    {
00:15:28.439      "nbd_device": "/dev/nbd4",
00:15:28.439      "bdev_name": "nvme2n1"
00:15:28.439    },
00:15:28.439    {
00:15:28.439      "nbd_device": "/dev/nbd5",
00:15:28.439      "bdev_name": "nvme3n1"
00:15:28.439    }
00:15:28.439  ]'
00:15:28.439    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
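nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, and the trace pipes that through jq to collect just the device paths into a bash array. The equivalent two lines (rpc.py abbreviates the full scripts/rpc.py path):

    # Ask the nbd target which devices it is currently exporting.
    nbd_disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # One array element per .nbd_device field: /dev/nbd0 ... /dev/nbd5.
    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))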
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:28.439   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:15:28.701    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
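Teardown mirrors setup: nbd_stop_disks walks the device list, issues nbd_stop_disk over the RPC socket, and waitfornbd_exit polls /proc/partitions until the entry disappears. A sketch under the same assumptions as the waitfornbd sketch above:

    nbd_stop_disks() {
        local rpc_server=$1 i
        local nbd_list=($2)
        for i in "${nbd_list[@]}"; do
            rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        # Wait for the kernel to drop the device from /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed interval
        done
        return 0
    }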
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:28.701   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:15:28.964    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:15:28.964    17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:28.964   17:04:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:15:29.226    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:29.226   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:15:29.487    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:29.487   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:15:29.748    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:29.748   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:29.748    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:15:29.748    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:29.748     17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:30.010    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:15:30.010     17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:15:30.010     17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:15:30.010    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:15:30.010     17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:15:30.010     17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:15:30.010     17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:15:30.010    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:15:30.010    17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
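nbd_get_count reduces the same nbd_get_disks JSON to a device count. The bare "true" line in the trace is the tell: with no devices exported, grep -c matches nothing and exits 1, so the helper appends || true to keep a count of 0 from aborting the script. Roughly:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches; '|| true' absorbs that.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }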
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:30.010   17:04:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:15:30.271  /dev/nbd0
00:15:30.271    17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:30.271  1+0 records in
00:15:30.271  1+0 records out
00:15:30.271  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667603 s, 6.1 MB/s
00:15:30.271    17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
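This second pass (nbd_rpc_data_verify) starts the disks with an explicit bdev-to-device mapping rather than letting the target pick: bdev_list and nbd_list are walked in lockstep and each pair goes to nbd_start_disk, which echoes the device path (/dev/nbd0 above) on success. A sketch:

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2) nbd_list=($3) i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # Bind bdev_list[i] (e.g. nvme0n1) to nbd_list[i] (e.g. /dev/nbd0).
            rpc.py -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }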
00:15:30.271   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1
00:15:30.533  /dev/nbd1
00:15:30.533    17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:30.533  1+0 records in
00:15:30.533  1+0 records out
00:15:30.533  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532396 s, 7.7 MB/s
00:15:30.533    17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:30.533   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10
00:15:30.795  /dev/nbd10
00:15:30.795    17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:30.795  1+0 records in
00:15:30.795  1+0 records out
00:15:30.795  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569614 s, 7.2 MB/s
00:15:30.795    17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:30.795   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11
00:15:31.057  /dev/nbd11
00:15:31.057    17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:31.057  1+0 records in
00:15:31.057  1+0 records out
00:15:31.057  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101925 s, 4.0 MB/s
00:15:31.057    17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:31.057   17:04:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12
00:15:31.057  /dev/nbd12
00:15:31.057    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:15:31.057   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:15:31.057   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:15:31.057   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:31.057   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:31.057   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:31.057   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:31.319  1+0 records in
00:15:31.319  1+0 records out
00:15:31.319  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506961 s, 8.1 MB/s
00:15:31.319    17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:15:31.319  /dev/nbd13
00:15:31.319    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:31.319  1+0 records in
00:15:31.319  1+0 records out
00:15:31.319  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816782 s, 5.0 MB/s
00:15:31.319    17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:31.319   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:15:31.319    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:15:31.319    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:31.319     17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:31.581    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd0",
00:15:31.581      "bdev_name": "nvme0n1"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd1",
00:15:31.581      "bdev_name": "nvme0n2"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd10",
00:15:31.581      "bdev_name": "nvme0n3"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd11",
00:15:31.581      "bdev_name": "nvme1n1"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd12",
00:15:31.581      "bdev_name": "nvme2n1"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd13",
00:15:31.581      "bdev_name": "nvme3n1"
00:15:31.581    }
00:15:31.581  ]'
00:15:31.581     17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:15:31.581     17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd0",
00:15:31.581      "bdev_name": "nvme0n1"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd1",
00:15:31.581      "bdev_name": "nvme0n2"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd10",
00:15:31.581      "bdev_name": "nvme0n3"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd11",
00:15:31.581      "bdev_name": "nvme1n1"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd12",
00:15:31.581      "bdev_name": "nvme2n1"
00:15:31.581    },
00:15:31.581    {
00:15:31.581      "nbd_device": "/dev/nbd13",
00:15:31.581      "bdev_name": "nvme3n1"
00:15:31.581    }
00:15:31.581  ]'
00:15:31.581    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:15:31.581  /dev/nbd1
00:15:31.581  /dev/nbd10
00:15:31.581  /dev/nbd11
00:15:31.581  /dev/nbd12
00:15:31.581  /dev/nbd13'
00:15:31.581     17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:15:31.581  /dev/nbd1
00:15:31.581  /dev/nbd10
00:15:31.581  /dev/nbd11
00:15:31.581  /dev/nbd12
00:15:31.581  /dev/nbd13'
00:15:31.581     17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:15:31.581    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:15:31.581    17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:15:31.581  256+0 records in
00:15:31.581  256+0 records out
00:15:31.581  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00772731 s, 136 MB/s
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:31.581   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:15:31.843  256+0 records in
00:15:31.843  256+0 records out
00:15:31.843  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188055 s, 5.6 MB/s
00:15:31.843   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:31.843   17:04:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:15:32.104  256+0 records in
00:15:32.104  256+0 records out
00:15:32.104  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223184 s, 4.7 MB/s
00:15:32.104   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:32.104   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:15:32.366  256+0 records in
00:15:32.366  256+0 records out
00:15:32.366  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.199215 s, 5.3 MB/s
00:15:32.366   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:32.366   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:15:32.628  256+0 records in
00:15:32.628  256+0 records out
00:15:32.628  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241763 s, 4.3 MB/s
00:15:32.628   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:32.628   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:15:32.890  256+0 records in
00:15:32.890  256+0 records out
00:15:32.890  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247544 s, 4.2 MB/s
00:15:32.890   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:15:32.890   17:04:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:15:33.155  256+0 records in
00:15:33.155  256+0 records out
00:15:33.155  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.309142 s, 3.4 MB/s
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
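nbd_dd_data_verify is the actual data-path check behind the dd and cmp lines above: in write mode it fills a 1 MiB scratch file (256 x 4 KiB) from /dev/urandom and copies it to every device with O_DIRECT; in verify mode it compares the first 1 MiB of each device byte-for-byte against that file, then deletes it. Condensed (scratch-file path abbreviated):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                # cmp exits non-zero on the first differing byte.
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }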
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:33.155   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:15:33.417    17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:33.417   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:15:33.678    17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:33.678   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:15:33.939    17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:15:33.939    17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:33.939   17:04:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:15:34.200    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:15:34.200   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:34.201   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:15:34.463    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:34.463   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:34.463    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:15:34.463    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:34.463     17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:15:34.725    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:15:34.725     17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:15:34.725     17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:15:34.725    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:15:34.725     17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:15:34.725     17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:15:34.725     17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:15:34.725    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:15:34.725    17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:15:34.725   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:15:34.986  malloc_lvol_verify
00:15:34.986   17:04:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:15:35.246  b7c1ab19-d22a-4687-b517-94270adccc48
00:15:35.246   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:15:35.246  9eafba47-e48a-40ee-a93e-32fe0a495542
00:15:35.246   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:15:35.508  /dev/nbd0
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
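wait_for_nbd_set_capacity keeps mkfs from racing the backend attach: it requires /sys/block/<nbd>/size to exist and read non-zero before returning (8192 512-byte sectors above, i.e. the 4 MiB lvol). A sketch with assumed retry details:

    wait_for_nbd_set_capacity() {
        local nbd=$(basename "$1") i
        [[ -e /sys/block/$nbd/size ]] || return 1
        for ((i = 1; i <= 20; i++)); do
            # Size is published in 512-byte sectors once capacity is set.
            (( $(< "/sys/block/$nbd/size") != 0 )) && return 0
            sleep 0.1   # assumed interval
        done
        return 1
    }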
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:15:35.508  mke2fs 1.47.0 (5-Feb-2023)
00:15:35.508  Discarding device blocks: 0/4096 done
00:15:35.508  Creating filesystem with 4096 1k blocks and 1024 inodes
00:15:35.508  
00:15:35.508  Allocating group tables: 0/1 done
00:15:35.508  Writing inode tables: 0/1 done
00:15:35.508  Creating journal (1024 blocks): done
00:15:35.508  Writing superblocks and filesystem accounting information: 0/1 done
00:15:35.508  
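The nbd_with_lvol_verify step chains the logical-volume RPCs end to end: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on it (the first UUID above), a 4 MiB lvol inside that store (the second UUID), exported as /dev/nbd0 and formatted with ext4 as a liveness check. The equivalent command sequence (rpc.py abbreviated as before):

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    wait_for_nbd_set_capacity /dev/nbd0
    mkfs.ext4 /dev/nbd0          # succeeds only if the export is usable
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0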
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:35.508   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:15:35.769    17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73720
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73720 ']'
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73720
00:15:35.769    17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:35.769    17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73720
00:15:35.769  killing process with pid 73720
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73720'
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73720
00:15:35.769   17:04:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73720
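killprocess is the standard autotest teardown helper visible in the trace: it rejects an empty pid, checks the process is alive with kill -0, inspects the comm name on Linux (reactor_0 here, the SPDK app's primary reactor), and only then signals and waits. A sketch; the sudo branch is inferred from the single comparison shown:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1          # refuse an empty pid
        kill -0 "$pid"                     # fails fast if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                # Assumed handling: signal the real child, not the wrapper.
                pid=$(ps --ppid "$pid" -o pid= | tr -d ' ')
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }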
00:15:36.713   17:04:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:15:36.713  
00:15:36.713  real	0m10.999s
00:15:36.713  user	0m14.612s
00:15:36.713  sys	0m3.878s
00:15:36.713   17:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:36.713   17:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:15:36.713  ************************************
00:15:36.713  END TEST bdev_nbd
00:15:36.713  ************************************
00:15:36.713   17:04:59 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:15:36.713   17:04:59 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']'
00:15:36.713   17:04:59 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']'
00:15:36.713   17:04:59 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:15:36.713   17:04:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:36.713   17:04:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:36.713   17:04:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:36.713  ************************************
00:15:36.713  START TEST bdev_fio
00:15:36.713  ************************************
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:15:36.713  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:15:36.713    17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:15:36.713    17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:15:36.713    17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:15:36.713   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
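
    # The blockdev.sh@340-342 loop above appends one fio job section per bdev
    # name to the generated bdev.fio; that is where the six job_nvme* sections
    # in the fio run below come from. Sketch of the generation step -- the
    # explicit redirect to $config_file is an assumption, the trace shows only
    # the echoes.
    for b in "${bdevs_name[@]}"; do
        {
            echo "[job_${b}]"
            echo "filename=${b}"
        } >> "$config_file"
    done
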
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:15:36.714  ************************************
00:15:36.714  START TEST bdev_fio_rw_verify
00:15:36.714  ************************************
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:36.714    17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:36.714    17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:15:36.714    17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:36.714   17:04:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
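
    # The LD_PRELOAD line above exists because the fio plugin was built with
    # AddressSanitizer: the runner ldd's the plugin, grabs the libasan path
    # (third ldd column), and preloads the runtime ahead of the plugin so the
    # sanitizer's interceptors resolve before fio's own symbols. Minimal sketch
    # of that detection step; "$@" stands in for the full fio argument list
    # shown in the trace.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
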
00:15:37.038  job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:37.038  job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:37.038  job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:37.038  job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:37.038  job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:37.038  job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:37.038  fio-3.35
00:15:37.038  Starting 6 threads
00:15:49.320  
00:15:49.320  job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74130: Mon Dec  9 17:05:10 2024
00:15:49.320    read: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(546MiB/10002msec)
00:15:49.320      slat (usec): min=2, max=1731, avg= 7.37, stdev=17.64
00:15:49.320      clat (usec): min=79, max=768691, avg=1379.82, stdev=5853.59
00:15:49.320       lat (usec): min=83, max=768703, avg=1387.18, stdev=5853.68
00:15:49.320      clat percentiles (usec):
00:15:49.320       | 50.000th=[  1237], 99.000th=[  3720], 99.900th=[  5014],
00:15:49.320       | 99.990th=[  6521], 99.999th=[767558]
00:15:49.320    write: IOPS=14.2k, BW=55.7MiB/s (58.4MB/s)(557MiB/10002msec); 0 zone resets
00:15:49.320      slat (usec): min=13, max=5918, avg=44.08, stdev=148.58
00:15:49.320      clat (usec): min=87, max=85917, avg=1680.81, stdev=1471.63
00:15:49.320       lat (usec): min=106, max=85948, avg=1724.89, stdev=1480.01
00:15:49.320      clat percentiles (usec):
00:15:49.320       | 50.000th=[ 1532], 99.000th=[ 4293], 99.900th=[ 5604], 99.990th=[84411],
00:15:49.320       | 99.999th=[85459]
00:15:49.320     bw (  KiB/s): min=41753, max=89208, per=100.00%, avg=57162.84, stdev=2034.69, samples=114
00:15:49.320     iops        : min=10435, max=22302, avg=14289.63, stdev=508.71, samples=114
00:15:49.320    lat (usec)   : 100=0.01%, 250=1.91%, 500=6.90%, 750=8.99%, 1000=11.50%
00:15:49.320    lat (msec)   : 2=47.79%, 4=21.80%, 10=1.10%, 50=0.01%, 100=0.01%
00:15:49.320    lat (msec)   : 1000=0.01%
00:15:49.320    cpu          : usr=42.68%, sys=33.27%, ctx=4745, majf=0, minf=14345
00:15:49.320    IO depths    : 1=10.8%, 2=23.2%, 4=51.7%, 8=14.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:49.320       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:49.320       complete  : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:49.320       issued rwts: total=139864,142497,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:49.320       latency   : target=0, window=0, percentile=100.00%, depth=8
00:15:49.320  
00:15:49.320  Run status group 0 (all jobs):
00:15:49.320     READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=546MiB (573MB), run=10002-10002msec
00:15:49.320    WRITE: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=557MiB (584MB), run=10002-10002msec
00:15:49.320  -----------------------------------------------------
00:15:49.320  Suppressions used:
00:15:49.320    count      bytes template
00:15:49.320        6         48 /usr/src/fio/parse.c
00:15:49.320     2532     243072 /usr/src/fio/iolog.c
00:15:49.320        1          8 libtcmalloc_minimal.so
00:15:49.320        1        904 libcrypto.so
00:15:49.320  -----------------------------------------------------
00:15:49.320  
00:15:49.320  
00:15:49.320  real	0m12.222s
00:15:49.320  user	0m27.279s
00:15:49.320  sys	0m20.372s
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:15:49.320  ************************************
00:15:49.320  END TEST bdev_fio_rw_verify
00:15:49.320  ************************************
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:15:49.320   17:05:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:15:49.320    17:05:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:15:49.321    17:05:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "nvme0n1",' '  "aliases": [' '    "c7f1eb36-c017-4680-b9f4-19a2be785b8f"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "c7f1eb36-c017-4680-b9f4-19a2be785b8f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n2",' '  "aliases": [' '    "dca67072-ce97-4774-8815-b908bb1addba"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "dca67072-ce97-4774-8815-b908bb1addba",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n3",' '  "aliases": [' '    "f35b22cf-1fe6-425d-98a3-4b091a914fac"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "f35b22cf-1fe6-425d-98a3-4b091a914fac",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme1n1",' '  "aliases": [' '    "c194976f-16a9-4e6f-8bce-94138842a85e"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "c194976f-16a9-4e6f-8bce-94138842a85e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme2n1",' '  "aliases": [' '    "4efb12c4-1ce8-44ef-bfaa-32b6211d88f1"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "4efb12c4-1ce8-44ef-bfaa-32b6211d88f1",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme3n1",' '  "aliases": [' '    "5fde65bb-8a1f-452b-868e-2052cdf0a15d"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "5fde65bb-8a1f-452b-868e-2052cdf0a15d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}'
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:49.321  /home/vagrant/spdk_repo/spdk
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
00:15:49.321  
00:15:49.321  real	0m12.400s
00:15:49.321  user	0m27.365s
00:15:49.321  sys	0m20.446s
00:15:49.321  ************************************
00:15:49.321  END TEST bdev_fio
00:15:49.321  ************************************
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:49.321   17:05:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:15:49.321   17:05:12 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:15:49.321   17:05:12 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:15:49.321   17:05:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:15:49.321   17:05:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:49.321   17:05:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:49.321  ************************************
00:15:49.321  START TEST bdev_verify
00:15:49.321  ************************************
00:15:49.321   17:05:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
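
    # The bdevperf invocation bdev_verify runs, reflowed for readability:
    # -q is the queue depth, -o the I/O size in bytes, -w the workload,
    # -t the run time in seconds, -m 0x3 the two-core mask; -C is passed
    # as in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
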
00:15:49.321  [2024-12-09 17:05:12.163255] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:15:49.321  [2024-12-09 17:05:12.163403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74304 ]
00:15:49.321  [2024-12-09 17:05:12.326036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:49.582  [2024-12-09 17:05:12.474370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:49.582  [2024-12-09 17:05:12.474464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:50.148  Running I/O for 5 seconds...
00:15:52.088      23232.00 IOPS,    90.75 MiB/s
[2024-12-09T17:05:16.502Z]     23280.00 IOPS,    90.94 MiB/s
[2024-12-09T17:05:17.437Z]     23721.67 IOPS,    92.66 MiB/s
[2024-12-09T17:05:18.376Z]     23568.75 IOPS,    92.07 MiB/s
[2024-12-09T17:05:18.376Z]     23251.80 IOPS,    90.83 MiB/s
00:15:55.335                                                                                                  Latency(us)
00:15:55.335  
[2024-12-09T17:05:18.376Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:55.335  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x0 length 0x80000
00:15:55.335  	 nvme0n1             :       5.08    1737.20       6.79       0.00     0.00   73545.17    7461.02   75013.51
00:15:55.335  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x80000 length 0x80000
00:15:55.335  	 nvme0n1             :       5.07    1716.81       6.71       0.00     0.00   74418.59    5797.42   66140.95
00:15:55.335  Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x0 length 0x80000
00:15:55.335  	 nvme0n2             :       5.07    1718.17       6.71       0.00     0.00   74173.13   15728.64   64527.75
00:15:55.335  Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x80000 length 0x80000
00:15:55.335  	 nvme0n2             :       5.09    1711.65       6.69       0.00     0.00   74484.06   10838.65   57268.38
00:15:55.335  Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x0 length 0x80000
00:15:55.335  	 nvme0n3             :       5.09    1734.13       6.77       0.00     0.00   73334.08    9376.69   67754.14
00:15:55.335  Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x80000 length 0x80000
00:15:55.335  	 nvme0n3             :       5.09    1711.17       6.68       0.00     0.00   74352.86   12401.43   66947.54
00:15:55.335  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x0 length 0xa0000
00:15:55.335  	 nvme1n1             :       5.10    1733.11       6.77       0.00     0.00   73199.25   11544.42   64527.75
00:15:55.335  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0xa0000 length 0xa0000
00:15:55.335  	 nvme1n1             :       5.08    1714.33       6.70       0.00     0.00   74071.90   13107.20   62107.96
00:15:55.335  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x0 length 0x20000
00:15:55.335  	 nvme2n1             :       5.10    1732.63       6.77       0.00     0.00   73061.96   11494.01   63721.16
00:15:55.335  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x20000 length 0x20000
00:15:55.335  	 nvme2n1             :       5.08    1713.83       6.69       0.00     0.00   73941.53    9376.69   67350.84
00:15:55.335  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0x0 length 0xbd0bd
00:15:55.335  	 nvme3n1             :       5.09    2884.87      11.27       0.00     0.00   43749.64    4940.41   64931.05
00:15:55.335  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:55.335  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:15:55.335  	 nvme3n1             :       5.08    2888.20      11.28       0.00     0.00   43733.69    4058.19   64527.75
00:15:55.335  
[2024-12-09T17:05:18.376Z]  ===================================================================================================================
00:15:55.335  
[2024-12-09T17:05:18.376Z]  Total                       :              22996.08      89.83       0.00     0.00   66295.10    4058.19   75013.51
00:15:56.272  
00:15:56.272  real	0m6.897s
00:15:56.272  user	0m11.000s
00:15:56.272  sys	0m1.736s
00:15:56.272  ************************************
00:15:56.272  END TEST bdev_verify
00:15:56.272  ************************************
00:15:56.272   17:05:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:56.272   17:05:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:15:56.272   17:05:19 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:15:56.272   17:05:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:15:56.272   17:05:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:56.272   17:05:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:56.272  ************************************
00:15:56.272  START TEST bdev_verify_big_io
00:15:56.272  ************************************
00:15:56.272   17:05:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:15:56.272  [2024-12-09 17:05:19.143903] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:15:56.272  [2024-12-09 17:05:19.144073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74406 ]
00:15:56.532  [2024-12-09 17:05:19.312353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:56.532  [2024-12-09 17:05:19.466242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:56.532  [2024-12-09 17:05:19.466331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:57.104  Running I/O for 5 seconds...
00:16:03.695       1024.00 IOPS,    64.00 MiB/s
[2024-12-09T17:05:26.994Z]      2784.00 IOPS,   174.00 MiB/s
[2024-12-09T17:05:27.253Z]      2988.67 IOPS,   186.79 MiB/s
00:16:04.212                                                                                                  Latency(us)
00:16:04.212  
[2024-12-09T17:05:27.253Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:04.212  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x0 length 0x8000
00:16:04.212  	 nvme0n1             :       6.07     126.43       7.90       0.00     0.00  969409.23    5142.06 1819682.66
00:16:04.212  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x8000 length 0x8000
00:16:04.212  	 nvme0n1             :       6.08     144.72       9.05       0.00     0.00  853084.86  113730.17 1090519.04
00:16:04.212  Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x0 length 0x8000
00:16:04.212  	 nvme0n2             :       6.08      93.47       5.84       0.00     0.00 1259736.12  106470.79 1819682.66
00:16:04.212  Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x8000 length 0x8000
00:16:04.212  	 nvme0n2             :       6.08     136.79       8.55       0.00     0.00  857498.42    9427.10  903388.55
00:16:04.212  Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x0 length 0x8000
00:16:04.212  	 nvme0n3             :       6.08      94.75       5.92       0.00     0.00 1205548.72  197616.25 2271376.94
00:16:04.212  Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x8000 length 0x8000
00:16:04.212  	 nvme0n3             :       6.07     113.32       7.08       0.00     0.00 1022093.13  142767.66 2000360.37
00:16:04.212  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x0 length 0xa000
00:16:04.212  	 nvme1n1             :       5.96      91.34       5.71       0.00     0.00 1223069.31  189550.28 2297188.04
00:16:04.212  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0xa000 length 0xa000
00:16:04.212  	 nvme1n1             :       6.07     142.22       8.89       0.00     0.00  786576.00   91952.05 1271196.75
00:16:04.212  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x0 length 0x2000
00:16:04.212  	 nvme2n1             :       6.07     115.94       7.25       0.00     0.00  944044.36   69367.34 1632552.17
00:16:04.212  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x2000 length 0x2000
00:16:04.212  	 nvme2n1             :       6.60     113.21       7.08       0.00     0.00  942449.12  124215.93 2168132.53
00:16:04.212  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0x0 length 0xbd0b
00:16:04.212  	 nvme3n1             :       7.10     117.23       7.33       0.00     0.00  859897.57     228.43 1238932.87
00:16:04.212  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:04.212  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:16:04.212  	 nvme3n1             :       7.10     144.22       9.01       0.00     0.00  727466.90     153.60 1277649.53
00:16:04.212  
[2024-12-09T17:05:27.253Z]  ===================================================================================================================
00:16:04.212  
[2024-12-09T17:05:27.253Z]  Total                       :               1433.64      89.60       0.00     0.00  941144.88     153.60 2297188.04
00:16:05.146  
00:16:05.146  real	0m8.867s
00:16:05.146  user	0m16.359s
00:16:05.146  sys	0m0.507s
00:16:05.146   17:05:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:05.146   17:05:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:16:05.146  ************************************
00:16:05.146  END TEST bdev_verify_big_io
00:16:05.146  ************************************
00:16:05.146   17:05:27 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:05.146   17:05:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:05.146   17:05:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:05.146   17:05:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:05.146  ************************************
00:16:05.146  START TEST bdev_write_zeroes
00:16:05.146  ************************************
00:16:05.146   17:05:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:05.146  [2024-12-09 17:05:28.027469] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:05.146  [2024-12-09 17:05:28.027592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74528 ]
00:16:05.146  [2024-12-09 17:05:28.179020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:05.405  [2024-12-09 17:05:28.268088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:05.665  Running I/O for 1 seconds...
00:16:06.607      75342.00 IOPS,   294.30 MiB/s
00:16:06.607                                                                                                  Latency(us)
00:16:06.607  
[2024-12-09T17:05:29.648Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:06.607  Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:06.607  	 nvme0n1             :       1.02   11677.10      45.61       0.00     0.00   10951.52    4032.98   24197.91
00:16:06.607  Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:06.607  	 nvme0n2             :       1.02   11663.98      45.56       0.00     0.00   10956.40    5242.88   24197.91
00:16:06.607  Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:06.607  	 nvme0n3             :       1.03   11694.36      45.68       0.00     0.00   10920.83    3302.01   24601.21
00:16:06.607  Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:06.607  	 nvme1n1             :       1.03   11599.65      45.31       0.00     0.00   11003.19    5142.06   28634.19
00:16:06.607  Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:06.607  	 nvme2n1             :       1.03   11556.99      45.14       0.00     0.00   11037.15    5091.64   28432.54
00:16:06.607  Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:06.607  	 nvme3n1             :       1.03   16268.69      63.55       0.00     0.00    7834.22    3087.75   23794.61
00:16:06.607  
[2024-12-09T17:05:29.648Z]  ===================================================================================================================
00:16:06.607  
[2024-12-09T17:05:29.648Z]  Total                       :              74460.77     290.86       0.00     0.00   10286.41    3087.75   28634.19
00:16:07.548  
00:16:07.548  real	0m2.560s
00:16:07.548  user	0m1.881s
00:16:07.548  sys	0m0.513s
00:16:07.548   17:05:30 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:07.548  ************************************
00:16:07.548  END TEST bdev_write_zeroes
00:16:07.548   17:05:30 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:16:07.548  ************************************
00:16:07.548   17:05:30 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:07.548   17:05:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:07.548   17:05:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:07.548   17:05:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:07.809  ************************************
00:16:07.809  START TEST bdev_json_nonenclosed
00:16:07.809  ************************************
00:16:07.809   17:05:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:07.809  [2024-12-09 17:05:30.677176] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:07.809  [2024-12-09 17:05:30.677347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74576 ]
00:16:07.809  [2024-12-09 17:05:30.841344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:08.105  [2024-12-09 17:05:30.971127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:08.105  [2024-12-09 17:05:30.971259] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:16:08.105  [2024-12-09 17:05:30.971281] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:16:08.105  [2024-12-09 17:05:30.971292] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
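
    # bdev_json_nonenclosed hands bdevperf a config whose top-level value is
    # not a JSON object and expects the app to stop non-zero, which is exactly
    # what json_config.c reports above. The log never prints nonenclosed.json
    # itself; a hypothetical minimal shape that would trip the "not enclosed
    # in {}" check is a top-level array (path and contents are illustrative):
    printf '%s\n' '[' '  { "subsystems": [] }' ']' > /tmp/nonenclosed-example.json
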
00:16:08.365  
00:16:08.365  real	0m0.584s
00:16:08.365  user	0m0.347s
00:16:08.365  sys	0m0.131s
00:16:08.365   17:05:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:08.365  ************************************
00:16:08.365  END TEST bdev_json_nonenclosed
00:16:08.365  ************************************
00:16:08.365   17:05:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:16:08.365   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:08.365   17:05:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:08.365   17:05:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:08.365   17:05:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:08.365  ************************************
00:16:08.365  START TEST bdev_json_nonarray
00:16:08.366  ************************************
00:16:08.366   17:05:31 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:08.366  [2024-12-09 17:05:31.319428] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:08.366  [2024-12-09 17:05:31.319677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74604 ]
00:16:08.627  [2024-12-09 17:05:31.490734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:08.627  [2024-12-09 17:05:31.630917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:08.627  [2024-12-09 17:05:31.631056] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:16:08.627  [2024-12-09 17:05:31.631078] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:16:08.627  [2024-12-09 17:05:31.631091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:08.887  
00:16:08.887  real	0m0.607s
00:16:08.888  user	0m0.368s
00:16:08.888  sys	0m0.130s
00:16:08.888   17:05:31 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:08.888   17:05:31 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:16:08.888  ************************************
00:16:08.888  END TEST bdev_json_nonarray
00:16:08.888  ************************************
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:16:08.888   17:05:31 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:16:09.458  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:27.545  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:16:27.545  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:16:27.545  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:16:27.545  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
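
    # setup.sh rebinds the emulated NVMe controllers from the kernel nvme
    # driver to uio_pci_generic; the virtio disk at 0000:00:03.0 is skipped
    # because it backs active mounts. A hedged sysfs check of the resulting
    # binding (device address taken from this log):
    dev=0000:00:10.0
    # the driver symlink names whichever driver currently owns the device
    basename "$(readlink "/sys/bus/pci/devices/$dev/driver")"   # expect: uio_pci_generic
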
00:16:27.545  
00:16:27.545  real	1m7.932s
00:16:27.545  user	1m24.376s
00:16:27.545  sys	1m7.350s
00:16:27.545   17:05:47 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:27.545   17:05:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:27.545  ************************************
00:16:27.545  END TEST blockdev_xnvme
00:16:27.545  ************************************
00:16:27.545   17:05:47  -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:16:27.545   17:05:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:27.545   17:05:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:27.545   17:05:47  -- common/autotest_common.sh@10 -- # set +x
00:16:27.545  ************************************
00:16:27.545  START TEST ublk
00:16:27.545  ************************************
00:16:27.545   17:05:47 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:16:27.545  * Looking for test storage...
00:16:27.545  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:27.545     17:05:47 ublk -- common/autotest_common.sh@1711 -- # lcov --version
00:16:27.545     17:05:47 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:27.545    17:05:47 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:27.545    17:05:47 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:27.545    17:05:47 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:27.545    17:05:47 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:16:27.545    17:05:47 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:16:27.545    17:05:47 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:16:27.545    17:05:47 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:16:27.545    17:05:47 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:16:27.545    17:05:47 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:16:27.545    17:05:47 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:16:27.545    17:05:47 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:27.545    17:05:47 ublk -- scripts/common.sh@344 -- # case "$op" in
00:16:27.545    17:05:47 ublk -- scripts/common.sh@345 -- # : 1
00:16:27.545    17:05:47 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:27.545    17:05:47 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:27.545     17:05:47 ublk -- scripts/common.sh@365 -- # decimal 1
00:16:27.545     17:05:47 ublk -- scripts/common.sh@353 -- # local d=1
00:16:27.545     17:05:47 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:27.545     17:05:47 ublk -- scripts/common.sh@355 -- # echo 1
00:16:27.545    17:05:47 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:16:27.545     17:05:47 ublk -- scripts/common.sh@366 -- # decimal 2
00:16:27.545     17:05:47 ublk -- scripts/common.sh@353 -- # local d=2
00:16:27.545     17:05:47 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:27.545     17:05:47 ublk -- scripts/common.sh@355 -- # echo 2
00:16:27.545    17:05:47 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:16:27.545    17:05:47 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:27.545    17:05:47 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:27.545    17:05:47 ublk -- scripts/common.sh@368 -- # return 0
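
    # The scripts/common.sh trace above is a dotted-version comparison: both
    # strings are split on '.', '-' and ':' and compared field by field as
    # integers, so lcov 1.15 sorts below 2 and the coverage flags below stay
    # enabled. Condensed sketch; missing fields default to 0, and the trace's
    # ^[0-9]+$ guard for non-numeric fields is elided.
    lt() {  # usage: lt 1.15 2  -> true when $1 < $2
        local IFS=.-: i n v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal
    }
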
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:27.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.545  		--rc genhtml_branch_coverage=1
00:16:27.545  		--rc genhtml_function_coverage=1
00:16:27.545  		--rc genhtml_legend=1
00:16:27.545  		--rc geninfo_all_blocks=1
00:16:27.545  		--rc geninfo_unexecuted_blocks=1
00:16:27.545  		
00:16:27.545  		'
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:27.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.545  		--rc genhtml_branch_coverage=1
00:16:27.545  		--rc genhtml_function_coverage=1
00:16:27.545  		--rc genhtml_legend=1
00:16:27.545  		--rc geninfo_all_blocks=1
00:16:27.545  		--rc geninfo_unexecuted_blocks=1
00:16:27.545  		
00:16:27.545  		'
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:27.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.545  		--rc genhtml_branch_coverage=1
00:16:27.545  		--rc genhtml_function_coverage=1
00:16:27.545  		--rc genhtml_legend=1
00:16:27.545  		--rc geninfo_all_blocks=1
00:16:27.545  		--rc geninfo_unexecuted_blocks=1
00:16:27.545  		
00:16:27.545  		'
00:16:27.545    17:05:47 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:27.545  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:27.545  		--rc genhtml_branch_coverage=1
00:16:27.545  		--rc genhtml_function_coverage=1
00:16:27.545  		--rc genhtml_legend=1
00:16:27.545  		--rc geninfo_all_blocks=1
00:16:27.545  		--rc geninfo_unexecuted_blocks=1
00:16:27.545  		
00:16:27.545  		'
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:16:27.545    17:05:47 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:16:27.545    17:05:47 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:16:27.545    17:05:47 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:16:27.545    17:05:47 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:16:27.545    17:05:47 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:16:27.545    17:05:47 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:16:27.545    17:05:47 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:16:27.545    17:05:47 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
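
    # The suite begins by loading the kernel driver with modprobe ublk_drv.
    # A small sanity check that the control node SPDK's ublk target needs
    # actually appeared (/dev/ublk-control is the standard node ublk_drv creates):
    modprobe ublk_drv
    test -c /dev/ublk-control || { echo 'ublk_drv loaded but /dev/ublk-control is missing' >&2; exit 1; }
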
00:16:27.545   17:05:47 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:16:27.545   17:05:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:27.545   17:05:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:27.545   17:05:47 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:27.545  ************************************
00:16:27.545  START TEST test_save_ublk_config
00:16:27.545  ************************************
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=74911
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:16:27.545  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 74911
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 74911 ']'
00:16:27.545   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:27.546   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:27.546   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:27.546   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:27.546   17:05:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:27.546  [2024-12-09 17:05:47.773613] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:27.546  [2024-12-09 17:05:47.773873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74911 ]
00:16:27.546  [2024-12-09 17:05:47.932513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:27.546  [2024-12-09 17:05:48.030968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:27.546  [2024-12-09 17:05:48.641868] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:27.546  [2024-12-09 17:05:48.642639] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:27.546  malloc0
00:16:27.546  [2024-12-09 17:05:48.705974] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:16:27.546  [2024-12-09 17:05:48.706049] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:16:27.546  [2024-12-09 17:05:48.706060] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:27.546  [2024-12-09 17:05:48.706067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:27.546  [2024-12-09 17:05:48.714931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:27.546  [2024-12-09 17:05:48.714953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:27.546  [2024-12-09 17:05:48.721876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:27.546  [2024-12-09 17:05:48.721964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:27.546  [2024-12-09 17:05:48.738873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:27.546  0
00:16:27.546   17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.546    17:05:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:16:27.546    17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.546    17:05:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:27.546    17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.546   17:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:16:27.546  "subsystems": [
00:16:27.546  {
00:16:27.546  "subsystem": "fsdev",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.546  "method": "fsdev_set_opts",
00:16:27.546  "params": {
00:16:27.546  "fsdev_io_pool_size": 65535,
00:16:27.546  "fsdev_io_cache_size": 256
00:16:27.546  }
00:16:27.546  }
00:16:27.546  ]
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "keyring",
00:16:27.546  "config": []
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "iobuf",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.546  "method": "iobuf_set_options",
00:16:27.546  "params": {
00:16:27.546  "small_pool_count": 8192,
00:16:27.546  "large_pool_count": 1024,
00:16:27.546  "small_bufsize": 8192,
00:16:27.546  "large_bufsize": 135168,
00:16:27.546  "enable_numa": false
00:16:27.546  }
00:16:27.546  }
00:16:27.546  ]
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "sock",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.546  "method": "sock_set_default_impl",
00:16:27.546  "params": {
00:16:27.546  "impl_name": "posix"
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "sock_impl_set_options",
00:16:27.546  "params": {
00:16:27.546  "impl_name": "ssl",
00:16:27.546  "recv_buf_size": 4096,
00:16:27.546  "send_buf_size": 4096,
00:16:27.546  "enable_recv_pipe": true,
00:16:27.546  "enable_quickack": false,
00:16:27.546  "enable_placement_id": 0,
00:16:27.546  "enable_zerocopy_send_server": true,
00:16:27.546  "enable_zerocopy_send_client": false,
00:16:27.546  "zerocopy_threshold": 0,
00:16:27.546  "tls_version": 0,
00:16:27.546  "enable_ktls": false
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "sock_impl_set_options",
00:16:27.546  "params": {
00:16:27.546  "impl_name": "posix",
00:16:27.546  "recv_buf_size": 2097152,
00:16:27.546  "send_buf_size": 2097152,
00:16:27.546  "enable_recv_pipe": true,
00:16:27.546  "enable_quickack": false,
00:16:27.546  "enable_placement_id": 0,
00:16:27.546  "enable_zerocopy_send_server": true,
00:16:27.546  "enable_zerocopy_send_client": false,
00:16:27.546  "zerocopy_threshold": 0,
00:16:27.546  "tls_version": 0,
00:16:27.546  "enable_ktls": false
00:16:27.546  }
00:16:27.546  }
00:16:27.546  ]
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "vmd",
00:16:27.546  "config": []
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "accel",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.546  "method": "accel_set_options",
00:16:27.546  "params": {
00:16:27.546  "small_cache_size": 128,
00:16:27.546  "large_cache_size": 16,
00:16:27.546  "task_count": 2048,
00:16:27.546  "sequence_count": 2048,
00:16:27.546  "buf_count": 2048
00:16:27.546  }
00:16:27.546  }
00:16:27.546  ]
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "bdev",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.546  "method": "bdev_set_options",
00:16:27.546  "params": {
00:16:27.546  "bdev_io_pool_size": 65535,
00:16:27.546  "bdev_io_cache_size": 256,
00:16:27.546  "bdev_auto_examine": true,
00:16:27.546  "iobuf_small_cache_size": 128,
00:16:27.546  "iobuf_large_cache_size": 16
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "bdev_raid_set_options",
00:16:27.546  "params": {
00:16:27.546  "process_window_size_kb": 1024,
00:16:27.546  "process_max_bandwidth_mb_sec": 0
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "bdev_iscsi_set_options",
00:16:27.546  "params": {
00:16:27.546  "timeout_sec": 30
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "bdev_nvme_set_options",
00:16:27.546  "params": {
00:16:27.546  "action_on_timeout": "none",
00:16:27.546  "timeout_us": 0,
00:16:27.546  "timeout_admin_us": 0,
00:16:27.546  "keep_alive_timeout_ms": 10000,
00:16:27.546  "arbitration_burst": 0,
00:16:27.546  "low_priority_weight": 0,
00:16:27.546  "medium_priority_weight": 0,
00:16:27.546  "high_priority_weight": 0,
00:16:27.546  "nvme_adminq_poll_period_us": 10000,
00:16:27.546  "nvme_ioq_poll_period_us": 0,
00:16:27.546  "io_queue_requests": 0,
00:16:27.546  "delay_cmd_submit": true,
00:16:27.546  "transport_retry_count": 4,
00:16:27.546  "bdev_retry_count": 3,
00:16:27.546  "transport_ack_timeout": 0,
00:16:27.546  "ctrlr_loss_timeout_sec": 0,
00:16:27.546  "reconnect_delay_sec": 0,
00:16:27.546  "fast_io_fail_timeout_sec": 0,
00:16:27.546  "disable_auto_failback": false,
00:16:27.546  "generate_uuids": false,
00:16:27.546  "transport_tos": 0,
00:16:27.546  "nvme_error_stat": false,
00:16:27.546  "rdma_srq_size": 0,
00:16:27.546  "io_path_stat": false,
00:16:27.546  "allow_accel_sequence": false,
00:16:27.546  "rdma_max_cq_size": 0,
00:16:27.546  "rdma_cm_event_timeout_ms": 0,
00:16:27.546  "dhchap_digests": [
00:16:27.546  "sha256",
00:16:27.546  "sha384",
00:16:27.546  "sha512"
00:16:27.546  ],
00:16:27.546  "dhchap_dhgroups": [
00:16:27.546  "null",
00:16:27.546  "ffdhe2048",
00:16:27.546  "ffdhe3072",
00:16:27.546  "ffdhe4096",
00:16:27.546  "ffdhe6144",
00:16:27.546  "ffdhe8192"
00:16:27.546  ]
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "bdev_nvme_set_hotplug",
00:16:27.546  "params": {
00:16:27.546  "period_us": 100000,
00:16:27.546  "enable": false
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "bdev_malloc_create",
00:16:27.546  "params": {
00:16:27.546  "name": "malloc0",
00:16:27.546  "num_blocks": 8192,
00:16:27.546  "block_size": 4096,
00:16:27.546  "physical_block_size": 4096,
00:16:27.546  "uuid": "a1b1308f-f6e8-4d96-a224-8c5451a1c188",
00:16:27.546  "optimal_io_boundary": 0,
00:16:27.546  "md_size": 0,
00:16:27.546  "dif_type": 0,
00:16:27.546  "dif_is_head_of_md": false,
00:16:27.546  "dif_pi_format": 0
00:16:27.546  }
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "method": "bdev_wait_for_examine"
00:16:27.546  }
00:16:27.546  ]
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "scsi",
00:16:27.546  "config": null
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "scheduler",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.546  "method": "framework_set_scheduler",
00:16:27.546  "params": {
00:16:27.546  "name": "static"
00:16:27.546  }
00:16:27.546  }
00:16:27.546  ]
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "vhost_scsi",
00:16:27.546  "config": []
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "vhost_blk",
00:16:27.546  "config": []
00:16:27.546  },
00:16:27.546  {
00:16:27.546  "subsystem": "ublk",
00:16:27.546  "config": [
00:16:27.546  {
00:16:27.547  "method": "ublk_create_target",
00:16:27.547  "params": {
00:16:27.547  "cpumask": "1"
00:16:27.547  }
00:16:27.547  },
00:16:27.547  {
00:16:27.547  "method": "ublk_start_disk",
00:16:27.547  "params": {
00:16:27.547  "bdev_name": "malloc0",
00:16:27.547  "ublk_id": 0,
00:16:27.547  "num_queues": 1,
00:16:27.547  "queue_depth": 128
00:16:27.547  }
00:16:27.547  }
00:16:27.547  ]
00:16:27.547  },
00:16:27.547  {
00:16:27.547  "subsystem": "nbd",
00:16:27.547  "config": []
00:16:27.547  },
00:16:27.547  {
00:16:27.547  "subsystem": "nvmf",
00:16:27.547  "config": [
00:16:27.547  {
00:16:27.547  "method": "nvmf_set_config",
00:16:27.547  "params": {
00:16:27.547  "discovery_filter": "match_any",
00:16:27.547  "admin_cmd_passthru": {
00:16:27.547  "identify_ctrlr": false
00:16:27.547  },
00:16:27.547  "dhchap_digests": [
00:16:27.547  "sha256",
00:16:27.547  "sha384",
00:16:27.547  "sha512"
00:16:27.547  ],
00:16:27.547  "dhchap_dhgroups": [
00:16:27.547  "null",
00:16:27.547  "ffdhe2048",
00:16:27.547  "ffdhe3072",
00:16:27.547  "ffdhe4096",
00:16:27.547  "ffdhe6144",
00:16:27.547  "ffdhe8192"
00:16:27.547  ]
00:16:27.547  }
00:16:27.547  },
00:16:27.547  {
00:16:27.547  "method": "nvmf_set_max_subsystems",
00:16:27.547  "params": {
00:16:27.547  "max_subsystems": 1024
00:16:27.547  }
00:16:27.547  },
00:16:27.547  {
00:16:27.547  "method": "nvmf_set_crdt",
00:16:27.547  "params": {
00:16:27.547  "crdt1": 0,
00:16:27.547  "crdt2": 0,
00:16:27.547  "crdt3": 0
00:16:27.547  }
00:16:27.547  }
00:16:27.547  ]
00:16:27.547  },
00:16:27.547  {
00:16:27.547  "subsystem": "iscsi",
00:16:27.547  "config": [
00:16:27.547  {
00:16:27.547  "method": "iscsi_set_options",
00:16:27.547  "params": {
00:16:27.547  "node_base": "iqn.2016-06.io.spdk",
00:16:27.547  "max_sessions": 128,
00:16:27.547  "max_connections_per_session": 2,
00:16:27.547  "max_queue_depth": 64,
00:16:27.547  "default_time2wait": 2,
00:16:27.547  "default_time2retain": 20,
00:16:27.547  "first_burst_length": 8192,
00:16:27.547  "immediate_data": true,
00:16:27.547  "allow_duplicated_isid": false,
00:16:27.547  "error_recovery_level": 0,
00:16:27.547  "nop_timeout": 60,
00:16:27.547  "nop_in_interval": 30,
00:16:27.547  "disable_chap": false,
00:16:27.547  "require_chap": false,
00:16:27.547  "mutual_chap": false,
00:16:27.547  "chap_group": 0,
00:16:27.547  "max_large_datain_per_connection": 64,
00:16:27.547  "max_r2t_per_connection": 4,
00:16:27.547  "pdu_pool_size": 36864,
00:16:27.547  "immediate_data_pool_size": 16384,
00:16:27.547  "data_out_pool_size": 2048
00:16:27.547  }
00:16:27.547  }
00:16:27.547  ]
00:16:27.547  }
00:16:27.547  ]
00:16:27.547  }'
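The config string captured above is the raw JSON returned by save_config. A quick sanity check that such a capture is well-formed and non-empty can be run by hand; this is a sketch assuming the standard scripts/rpc.py client shipped in the SPDK repository:

  # Re-request the config and assert it parses and lists at least one subsystem.
  ./scripts/rpc.py save_config | jq -e '.subsystems | length > 0' >/dev/null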
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 74911
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 74911 ']'
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 74911
00:16:27.547    17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:27.547    17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74911
00:16:27.547  killing process with pid 74911
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74911'
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 74911
00:16:27.547   17:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 74911
00:16:27.547  [2024-12-09 17:05:50.090174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:27.547  [2024-12-09 17:05:50.121892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:27.547  [2024-12-09 17:05:50.122008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:27.547  [2024-12-09 17:05:50.130877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:27.547  [2024-12-09 17:05:50.130933] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:27.547  [2024-12-09 17:05:50.130945] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:27.547  [2024-12-09 17:05:50.130970] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:27.547  [2024-12-09 17:05:50.131106] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
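Between the two target runs, the test replays the saved configuration into a fresh spdk_tgt. A minimal sketch of the same round trip done manually, assuming scripts/rpc.py and an illustrative temp-file path:

  # Capture the live configuration, stop the old target, replay into a new one.
  ./scripts/rpc.py save_config > /tmp/ublk_config.json
  kill "$tgtpid"
  wait "$tgtpid" || true
  ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &

  # The test itself skips the temp file via process substitution, which is
  # where the -c /dev/fd/63 argument seen below comes from:
  ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &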
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=74960
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 74960
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 74960 ']'
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:28.487  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:28.487   17:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:28.487    17:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
00:16:28.487  "subsystems": [
00:16:28.487  {
00:16:28.487  "subsystem": "fsdev",
00:16:28.487  "config": [
00:16:28.487  {
00:16:28.487  "method": "fsdev_set_opts",
00:16:28.487  "params": {
00:16:28.487  "fsdev_io_pool_size": 65535,
00:16:28.487  "fsdev_io_cache_size": 256
00:16:28.487  }
00:16:28.487  }
00:16:28.487  ]
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "subsystem": "keyring",
00:16:28.487  "config": []
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "subsystem": "iobuf",
00:16:28.487  "config": [
00:16:28.487  {
00:16:28.487  "method": "iobuf_set_options",
00:16:28.487  "params": {
00:16:28.487  "small_pool_count": 8192,
00:16:28.487  "large_pool_count": 1024,
00:16:28.487  "small_bufsize": 8192,
00:16:28.487  "large_bufsize": 135168,
00:16:28.487  "enable_numa": false
00:16:28.487  }
00:16:28.487  }
00:16:28.487  ]
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "subsystem": "sock",
00:16:28.487  "config": [
00:16:28.487  {
00:16:28.487  "method": "sock_set_default_impl",
00:16:28.487  "params": {
00:16:28.487  "impl_name": "posix"
00:16:28.487  }
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "method": "sock_impl_set_options",
00:16:28.487  "params": {
00:16:28.487  "impl_name": "ssl",
00:16:28.487  "recv_buf_size": 4096,
00:16:28.487  "send_buf_size": 4096,
00:16:28.487  "enable_recv_pipe": true,
00:16:28.487  "enable_quickack": false,
00:16:28.487  "enable_placement_id": 0,
00:16:28.487  "enable_zerocopy_send_server": true,
00:16:28.487  "enable_zerocopy_send_client": false,
00:16:28.487  "zerocopy_threshold": 0,
00:16:28.487  "tls_version": 0,
00:16:28.487  "enable_ktls": false
00:16:28.487  }
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "method": "sock_impl_set_options",
00:16:28.487  "params": {
00:16:28.487  "impl_name": "posix",
00:16:28.487  "recv_buf_size": 2097152,
00:16:28.487  "send_buf_size": 2097152,
00:16:28.487  "enable_recv_pipe": true,
00:16:28.487  "enable_quickack": false,
00:16:28.487  "enable_placement_id": 0,
00:16:28.487  "enable_zerocopy_send_server": true,
00:16:28.487  "enable_zerocopy_send_client": false,
00:16:28.487  "zerocopy_threshold": 0,
00:16:28.487  "tls_version": 0,
00:16:28.487  "enable_ktls": false
00:16:28.487  }
00:16:28.487  }
00:16:28.487  ]
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "subsystem": "vmd",
00:16:28.487  "config": []
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "subsystem": "accel",
00:16:28.487  "config": [
00:16:28.487  {
00:16:28.487  "method": "accel_set_options",
00:16:28.487  "params": {
00:16:28.487  "small_cache_size": 128,
00:16:28.487  "large_cache_size": 16,
00:16:28.487  "task_count": 2048,
00:16:28.487  "sequence_count": 2048,
00:16:28.487  "buf_count": 2048
00:16:28.487  }
00:16:28.487  }
00:16:28.487  ]
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "subsystem": "bdev",
00:16:28.487  "config": [
00:16:28.487  {
00:16:28.487  "method": "bdev_set_options",
00:16:28.487  "params": {
00:16:28.487  "bdev_io_pool_size": 65535,
00:16:28.487  "bdev_io_cache_size": 256,
00:16:28.487  "bdev_auto_examine": true,
00:16:28.487  "iobuf_small_cache_size": 128,
00:16:28.487  "iobuf_large_cache_size": 16
00:16:28.487  }
00:16:28.487  },
00:16:28.487  {
00:16:28.487  "method": "bdev_raid_set_options",
00:16:28.487  "params": {
00:16:28.488  "process_window_size_kb": 1024,
00:16:28.488  "process_max_bandwidth_mb_sec": 0
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "bdev_iscsi_set_options",
00:16:28.488  "params": {
00:16:28.488  "timeout_sec": 30
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "bdev_nvme_set_options",
00:16:28.488  "params": {
00:16:28.488  "action_on_timeout": "none",
00:16:28.488  "timeout_us": 0,
00:16:28.488  "timeout_admin_us": 0,
00:16:28.488  "keep_alive_timeout_ms": 10000,
00:16:28.488  "arbitration_burst": 0,
00:16:28.488  "low_priority_weight": 0,
00:16:28.488  "medium_priority_weight": 0,
00:16:28.488  "high_priority_weight": 0,
00:16:28.488  "nvme_adminq_poll_period_us": 10000,
00:16:28.488  "nvme_ioq_poll_period_us": 0,
00:16:28.488  "io_queue_requests": 0,
00:16:28.488  "delay_cmd_submit": true,
00:16:28.488  "transport_retry_count": 4,
00:16:28.488  "bdev_retry_count": 3,
00:16:28.488  "transport_ack_timeout": 0,
00:16:28.488  "ctrlr_loss_timeout_sec": 0,
00:16:28.488  "reconnect_delay_sec": 0,
00:16:28.488  "fast_io_fail_timeout_sec": 0,
00:16:28.488  "disable_auto_failback": false,
00:16:28.488  "generate_uuids": false,
00:16:28.488  "transport_tos": 0,
00:16:28.488  "nvme_error_stat": false,
00:16:28.488  "rdma_srq_size": 0,
00:16:28.488  "io_path_stat": false,
00:16:28.488  "allow_accel_sequence": false,
00:16:28.488  "rdma_max_cq_size": 0,
00:16:28.488  "rdma_cm_event_timeout_ms": 0,
00:16:28.488  "dhchap_digests": [
00:16:28.488  "sha256",
00:16:28.488  "sha384",
00:16:28.488  "sha512"
00:16:28.488  ],
00:16:28.488  "dhchap_dhgroups": [
00:16:28.488  "null",
00:16:28.488  "ffdhe2048",
00:16:28.488  "ffdhe3072",
00:16:28.488  "ffdhe4096",
00:16:28.488  "ffdhe6144",
00:16:28.488  "ffdhe8192"
00:16:28.488  ]
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "bdev_nvme_set_hotplug",
00:16:28.488  "params": {
00:16:28.488  "period_us": 100000,
00:16:28.488  "enable": false
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "bdev_malloc_create",
00:16:28.488  "params": {
00:16:28.488  "name": "malloc0",
00:16:28.488  "num_blocks": 8192,
00:16:28.488  "block_size": 4096,
00:16:28.488  "physical_block_size": 4096,
00:16:28.488  "uuid": "a1b1308f-f6e8-4d96-a224-8c5451a1c188",
00:16:28.488  "optimal_io_boundary": 0,
00:16:28.488  "md_size": 0,
00:16:28.488  "dif_type": 0,
00:16:28.488  "dif_is_head_of_md": false,
00:16:28.488  "dif_pi_format": 0
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "bdev_wait_for_examine"
00:16:28.488  }
00:16:28.488  ]
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "scsi",
00:16:28.488  "config": null
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "scheduler",
00:16:28.488  "config": [
00:16:28.488  {
00:16:28.488  "method": "framework_set_scheduler",
00:16:28.488  "params": {
00:16:28.488  "name": "static"
00:16:28.488  }
00:16:28.488  }
00:16:28.488  ]
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "vhost_scsi",
00:16:28.488  "config": []
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "vhost_blk",
00:16:28.488  "config": []
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "ublk",
00:16:28.488  "config": [
00:16:28.488  {
00:16:28.488  "method": "ublk_create_target",
00:16:28.488  "params": {
00:16:28.488  "cpumask": "1"
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "ublk_start_disk",
00:16:28.488  "params": {
00:16:28.488  "bdev_name": "malloc0",
00:16:28.488  "ublk_id": 0,
00:16:28.488  "num_queues": 1,
00:16:28.488  "queue_depth": 128
00:16:28.488  }
00:16:28.488  }
00:16:28.488  ]
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "nbd",
00:16:28.488  "config": []
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "nvmf",
00:16:28.488  "config": [
00:16:28.488  {
00:16:28.488  "method": "nvmf_set_config",
00:16:28.488  "params": {
00:16:28.488  "discovery_filter": "match_any",
00:16:28.488  "admin_cmd_passthru": {
00:16:28.488  "identify_ctrlr": false
00:16:28.488  },
00:16:28.488  "dhchap_digests": [
00:16:28.488  "sha256",
00:16:28.488  "sha384",
00:16:28.488  "sha512"
00:16:28.488  ],
00:16:28.488  "dhchap_dhgroups": [
00:16:28.488  "null",
00:16:28.488  "ffdhe2048",
00:16:28.488  "ffdhe3072",
00:16:28.488  "ffdhe4096",
00:16:28.488  "ffdhe6144",
00:16:28.488  "ffdhe8192"
00:16:28.488  ]
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "nvmf_set_max_subsystems",
00:16:28.488  "params": {
00:16:28.488  "max_subsystems": 1024
00:16:28.488  }
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "method": "nvmf_set_crdt",
00:16:28.488  "params": {
00:16:28.488  "crdt1": 0,
00:16:28.488  "crdt2": 0,
00:16:28.488  "crdt3": 0
00:16:28.488  }
00:16:28.488  }
00:16:28.488  ]
00:16:28.488  },
00:16:28.488  {
00:16:28.488  "subsystem": "iscsi",
00:16:28.488  "config": [
00:16:28.488  {
00:16:28.488  "method": "iscsi_set_options",
00:16:28.488  "params": {
00:16:28.488  "node_base": "iqn.2016-06.io.spdk",
00:16:28.488  "max_sessions": 128,
00:16:28.488  "max_connections_per_session": 2,
00:16:28.488  "max_queue_depth": 64,
00:16:28.488  "default_time2wait": 2,
00:16:28.488  "default_time2retain": 20,
00:16:28.488  "first_burst_length": 8192,
00:16:28.488  "immediate_data": true,
00:16:28.488  "allow_duplicated_isid": false,
00:16:28.488  "error_recovery_level": 0,
00:16:28.488  "nop_timeout": 60,
00:16:28.488  "nop_in_interval": 30,
00:16:28.488  "disable_chap": false,
00:16:28.488  "require_chap": false,
00:16:28.488  "mutual_chap": false,
00:16:28.488  "chap_group": 0,
00:16:28.488  "max_large_datain_per_connection": 64,
00:16:28.488  "max_r2t_per_connection": 4,
00:16:28.488  "pdu_pool_size": 36864,
00:16:28.488  "immediate_data_pool_size": 16384,
00:16:28.488  "data_out_pool_size": 2048
00:16:28.488  }
00:16:28.488  }
00:16:28.488  ]
00:16:28.488  }
00:16:28.488  ]
00:16:28.488  }'
00:16:28.488   17:05:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:28.488   17:05:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:16:28.488  [2024-12-09 17:05:51.467797] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:28.488  [2024-12-09 17:05:51.467922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74960 ]
00:16:28.749  [2024-12-09 17:05:51.624190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:28.749  [2024-12-09 17:05:51.718361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:29.688  [2024-12-09 17:05:52.476866] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:29.688  [2024-12-09 17:05:52.477667] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:29.688  [2024-12-09 17:05:52.484983] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:16:29.688  [2024-12-09 17:05:52.485053] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:16:29.688  [2024-12-09 17:05:52.485063] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:29.688  [2024-12-09 17:05:52.485069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:29.688  [2024-12-09 17:05:52.493930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:29.688  [2024-12-09 17:05:52.493951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:29.688  [2024-12-09 17:05:52.500872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:29.688  [2024-12-09 17:05:52.500960] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:29.688  [2024-12-09 17:05:52.517878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:16:29.688    17:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:16:29.688    17:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:16:29.688    17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.688    17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:29.688    17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 74960
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 74960 ']'
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 74960
00:16:29.688    17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:16:29.688   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:29.689    17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74960
00:16:29.689  killing process with pid 74960
00:16:29.689   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:29.689   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:29.689   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74960'
00:16:29.689   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 74960
00:16:29.689   17:05:52 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 74960
00:16:31.070  [2024-12-09 17:05:53.748453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:31.070  [2024-12-09 17:05:53.794882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:31.070  [2024-12-09 17:05:53.795006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:31.070  [2024-12-09 17:05:53.802876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:31.070  [2024-12-09 17:05:53.802922] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:31.070  [2024-12-09 17:05:53.802929] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:31.070  [2024-12-09 17:05:53.802954] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:31.070  [2024-12-09 17:05:53.803091] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:16:32.445   17:05:55 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:16:32.445  
00:16:32.445  real	0m7.406s
00:16:32.445  user	0m5.164s
00:16:32.445  sys	0m2.845s
00:16:32.445   17:05:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:32.445   17:05:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:32.445  ************************************
00:16:32.445  END TEST test_save_ublk_config
00:16:32.446  ************************************
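Both save/restore runs above, and the create tests that follow, assume the kernel-side ublk driver is already loaded. The module name below (ublk_drv, the usual upstream name used by SPDK's test setup) is an assumption and may vary by kernel build:

  sudo modprobe ublk_drv
  test -c /dev/ublk-control   # control node appears once the driver is up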
00:16:32.446   17:05:55 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75041
00:16:32.446   17:05:55 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:32.446   17:05:55 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75041
00:16:32.446   17:05:55 ublk -- common/autotest_common.sh@835 -- # '[' -z 75041 ']'
00:16:32.446   17:05:55 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:32.446   17:05:55 ublk -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:32.446  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:32.446   17:05:55 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:32.446   17:05:55 ublk -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:32.446   17:05:55 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:16:32.446   17:05:55 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:32.446  [2024-12-09 17:05:55.218297] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:32.446  [2024-12-09 17:05:55.218534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75041 ]
00:16:32.446  [2024-12-09 17:05:55.375048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:32.446  [2024-12-09 17:05:55.464159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:32.446  [2024-12-09 17:05:55.464222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.010   17:05:55 ublk -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:33.010   17:05:55 ublk -- common/autotest_common.sh@868 -- # return 0
00:16:33.010   17:05:55 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:16:33.010   17:05:55 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:33.010   17:05:55 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:33.010   17:05:55 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:33.010  ************************************
00:16:33.010  START TEST test_create_ublk
00:16:33.010  ************************************
00:16:33.010   17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk
00:16:33.010    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:16:33.010    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.010    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:33.010  [2024-12-09 17:05:56.018865] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:33.010  [2024-12-09 17:05:56.020521] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:33.010    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.010   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:16:33.010    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:16:33.010    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.010    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.277   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:16:33.277    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:33.277  [2024-12-09 17:05:56.200980] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:16:33.277  [2024-12-09 17:05:56.201305] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:16:33.277  [2024-12-09 17:05:56.201320] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:33.277  [2024-12-09 17:05:56.201326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:33.277  [2024-12-09 17:05:56.208878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:33.277  [2024-12-09 17:05:56.208899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:33.277  [2024-12-09 17:05:56.216868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:33.277  [2024-12-09 17:05:56.217395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:33.277  [2024-12-09 17:05:56.238874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.277   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:16:33.277   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:16:33.277    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:33.277    17:05:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.277   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:16:33.277  {
00:16:33.277  "ublk_device": "/dev/ublkb0",
00:16:33.277  "id": 0,
00:16:33.277  "queue_depth": 512,
00:16:33.277  "num_queues": 4,
00:16:33.277  "bdev_name": "Malloc0"
00:16:33.277  }
00:16:33.277  ]'
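The jq probes that follow pick individual fields out of that listing; the same checks run by hand look like this (a sketch, assuming the harness's rpc_cmd maps to scripts/rpc.py as in the standard SPDK test setup):

  ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  ./scripts/rpc.py ublk_get_disks | jq -r '.[0].queue_depth'   # expect 512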
00:16:33.277    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:16:33.277   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:33.277    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:16:33.554   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:16:33.554    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:16:33.554   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:16:33.554    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:16:33.554   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:16:33.554    17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:16:33.554   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:33.554   17:05:56 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:16:33.554   17:05:56 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:16:33.554  fio: verification read phase will never start because write phase uses all of runtime
00:16:33.554  fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:16:33.554  fio-3.35
00:16:33.554  Starting 1 process
00:16:45.766  
00:16:45.766  fio_test: (groupid=0, jobs=1): err= 0: pid=75081: Mon Dec  9 17:06:06 2024
00:16:45.766    write: IOPS=15.9k, BW=62.0MiB/s (65.0MB/s)(620MiB/10001msec); 0 zone resets
00:16:45.766      clat (usec): min=40, max=10773, avg=62.16, stdev=146.11
00:16:45.766       lat (usec): min=41, max=10790, avg=62.63, stdev=146.14
00:16:45.766      clat percentiles (usec):
00:16:45.766       |  1.00th=[   47],  5.00th=[   49], 10.00th=[   50], 20.00th=[   51],
00:16:45.766       | 30.00th=[   52], 40.00th=[   53], 50.00th=[   54], 60.00th=[   55],
00:16:45.766       | 70.00th=[   57], 80.00th=[   58], 90.00th=[   63], 95.00th=[   69],
00:16:45.766       | 99.00th=[   81], 99.50th=[  188], 99.90th=[ 3228], 99.95th=[ 3523],
00:16:45.766       | 99.99th=[ 3884]
00:16:45.766     bw (  KiB/s): min=29816, max=69392, per=99.75%, avg=63340.21, stdev=10748.59, samples=19
00:16:45.766     iops        : min= 7454, max=17348, avg=15835.05, stdev=2687.15, samples=19
00:16:45.766    lat (usec)   : 50=13.29%, 100=86.08%, 250=0.30%, 500=0.07%, 750=0.01%
00:16:45.766    lat (usec)   : 1000=0.01%
00:16:45.766    lat (msec)   : 2=0.05%, 4=0.18%, 10=0.01%, 20=0.01%
00:16:45.766    cpu          : usr=2.39%, sys=15.67%, ctx=158767, majf=0, minf=796
00:16:45.766    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:45.766       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:45.766       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:45.766       issued rwts: total=0,158761,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:45.766       latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:45.766  
00:16:45.766  Run status group 0 (all jobs):
00:16:45.766    WRITE: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=620MiB (650MB), run=10001-10001msec
00:16:45.766  
00:16:45.766  Disk stats (read/write):
00:16:45.766    ublkb0: ios=0/157000, merge=0/0, ticks=0/7909, in_queue=7910, util=99.08%
00:16:45.766   17:06:06 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.766  [2024-12-09 17:06:06.657099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:45.766  [2024-12-09 17:06:06.698409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:45.766  [2024-12-09 17:06:06.699159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:45.766  [2024-12-09 17:06:06.706873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:45.766  [2024-12-09 17:06:06.707101] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:45.766  [2024-12-09 17:06:06.707111] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.766   17:06:06 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:45.766    17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.766  [2024-12-09 17:06:06.725928] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:16:45.766  request:
00:16:45.766  {
00:16:45.766  "ublk_id": 0,
00:16:45.766  "method": "ublk_stop_disk",
00:16:45.766  "req_id": 1
00:16:45.766  }
00:16:45.766  Got JSON-RPC error response
00:16:45.766  response:
00:16:45.766  {
00:16:45.766  "code": -19,
00:16:45.766  "message": "No such device"
00:16:45.766  }
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
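The NOT wrapper above is the harness's way of asserting that a command fails: stopping the already-removed disk must return the JSON-RPC error shown in the response. A hand-run equivalent, as a sketch under the same rpc.py assumption:

  if ./scripts/rpc.py ublk_stop_disk 0; then
      echo 'unexpected success: disk 0 should already be gone' >&2
      exit 1
  fi   # expected failure: code -19, 'No such device'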
00:16:45.766   17:06:06 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.766   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.766  [2024-12-09 17:06:06.738930] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:45.767  [2024-12-09 17:06:06.742722] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:16:45.767  [2024-12-09 17:06:06.742756] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:16:45.767   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:06 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:45.767   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767   17:06:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767   17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:16:45.767    17:06:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:45.767    17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767    17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:45.767    17:06:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:16:45.767   17:06:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:16:45.767    17:06:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:16:45.767    17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767    17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:16:45.767    17:06:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:16:45.767  ************************************
00:16:45.767  END TEST test_create_ublk
00:16:45.767  ************************************
00:16:45.767   17:06:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:16:45.767  
00:16:45.767  real	0m11.198s
00:16:45.767  user	0m0.524s
00:16:45.767  sys	0m1.657s
00:16:45.767   17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:45.767   17:06:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767   17:06:07 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:16:45.767   17:06:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:45.767   17:06:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:45.767   17:06:07 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767  ************************************
00:16:45.767  START TEST test_create_multi_ublk
00:16:45.767  ************************************
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767  [2024-12-09 17:06:07.253865] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:45.767  [2024-12-09 17:06:07.255535] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767  [2024-12-09 17:06:07.493984] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:16:45.767  [2024-12-09 17:06:07.494326] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:16:45.767  [2024-12-09 17:06:07.494337] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:45.767  [2024-12-09 17:06:07.494347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:45.767  [2024-12-09 17:06:07.517864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:45.767  [2024-12-09 17:06:07.517888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:45.767  [2024-12-09 17:06:07.529864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:45.767  [2024-12-09 17:06:07.530403] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:45.767  [2024-12-09 17:06:07.565872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767  [2024-12-09 17:06:07.787969] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:16:45.767  [2024-12-09 17:06:07.788285] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:16:45.767  [2024-12-09 17:06:07.788298] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:16:45.767  [2024-12-09 17:06:07.788303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:16:45.767  [2024-12-09 17:06:07.795885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:45.767  [2024-12-09 17:06:07.795904] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:45.767  [2024-12-09 17:06:07.803871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:45.767  [2024-12-09 17:06:07.804395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:16:45.767  [2024-12-09 17:06:07.820872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767  [2024-12-09 17:06:07.995956] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:16:45.767  [2024-12-09 17:06:07.996270] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:16:45.767  [2024-12-09 17:06:07.996282] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:16:45.767  [2024-12-09 17:06:07.996288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:16:45.767  [2024-12-09 17:06:08.003880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:45.767  [2024-12-09 17:06:08.003901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:45.767  [2024-12-09 17:06:08.011867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:45.767  [2024-12-09 17:06:08.012409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:16:45.767  [2024-12-09 17:06:08.020883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:16:45.767   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.767  [2024-12-09 17:06:08.195973] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:16:45.767  [2024-12-09 17:06:08.196285] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:16:45.767  [2024-12-09 17:06:08.196300] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:16:45.767  [2024-12-09 17:06:08.196305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
00:16:45.767  [2024-12-09 17:06:08.203891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:45.767  [2024-12-09 17:06:08.203909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:45.767  [2024-12-09 17:06:08.211874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:45.767  [2024-12-09 17:06:08.212397] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:16:45.767  [2024-12-09 17:06:08.220898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.767   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:16:45.767    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:16:45.768  {
00:16:45.768  "ublk_device": "/dev/ublkb0",
00:16:45.768  "id": 0,
00:16:45.768  "queue_depth": 512,
00:16:45.768  "num_queues": 4,
00:16:45.768  "bdev_name": "Malloc0"
00:16:45.768  },
00:16:45.768  {
00:16:45.768  "ublk_device": "/dev/ublkb1",
00:16:45.768  "id": 1,
00:16:45.768  "queue_depth": 512,
00:16:45.768  "num_queues": 4,
00:16:45.768  "bdev_name": "Malloc1"
00:16:45.768  },
00:16:45.768  {
00:16:45.768  "ublk_device": "/dev/ublkb2",
00:16:45.768  "id": 2,
00:16:45.768  "queue_depth": 512,
00:16:45.768  "num_queues": 4,
00:16:45.768  "bdev_name": "Malloc2"
00:16:45.768  },
00:16:45.768  {
00:16:45.768  "ublk_device": "/dev/ublkb3",
00:16:45.768  "id": 3,
00:16:45.768  "queue_depth": 512,
00:16:45.768  "num_queues": 4,
00:16:45.768  "bdev_name": "Malloc3"
00:16:45.768  }
00:16:45.768  ]'
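
[Editor's note] ublk_get_disks returns the JSON array above; the assertions that follow pull individual fields out of it with jq. A sketch of the first device's checks, reusing the exact filters from the trace (capturing the output in a variable is an illustrative assumption):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  disks=$($rpc ublk_get_disks)
  # each entry should round-trip the parameters the disk was created with
  [[ $(jq -r '.[0].ublk_device' <<< "$disks") == /dev/ublkb0 ]]
  [[ $(jq -r '.[0].queue_depth' <<< "$disks") == 512 ]]
  [[ $(jq -r '.[0].num_queues' <<< "$disks") == 4 ]]
  [[ $(jq -r '.[0].bdev_name' <<< "$disks") == Malloc0 ]]
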
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:16:45.768   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:16:45.768    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:46.026    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:46.026    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:16:46.026    17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.026  [2024-12-09 17:06:08.899946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:46.026  [2024-12-09 17:06:08.939907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:46.026  [2024-12-09 17:06:08.940567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:46.026  [2024-12-09 17:06:08.948899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:46.026  [2024-12-09 17:06:08.949127] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:46.026  [2024-12-09 17:06:08.949136] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.026   17:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.026  [2024-12-09 17:06:08.963941] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:16:46.026  [2024-12-09 17:06:08.996406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:46.026  [2024-12-09 17:06:08.997291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:16:46.026  [2024-12-09 17:06:09.003869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:46.026  [2024-12-09 17:06:09.004090] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:16:46.026  [2024-12-09 17:06:09.004098] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.026  [2024-12-09 17:06:09.017930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:16:46.026  [2024-12-09 17:06:09.050900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:46.026  [2024-12-09 17:06:09.051501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:16:46.026  [2024-12-09 17:06:09.062875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:46.026  [2024-12-09 17:06:09.063093] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:16:46.026  [2024-12-09 17:06:09.063102] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.026   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.284  [2024-12-09 17:06:09.067023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:16:46.284  [2024-12-09 17:06:09.109889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:46.284  [2024-12-09 17:06:09.110462] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:16:46.284  [2024-12-09 17:06:09.114135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:46.284  [2024-12-09 17:06:09.114357] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:16:46.284  [2024-12-09 17:06:09.114365] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:16:46.284   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.284   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:16:46.284  [2024-12-09 17:06:09.316909] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:46.284  [2024-12-09 17:06:09.320705] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:16:46.284  [2024-12-09 17:06:09.320733] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
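
[Editor's note] Teardown mirrors setup in reverse: stop each disk by id (which issues STOP_DEV and DEL_DEV per device), destroy the ublk target with an extended 120 s RPC timeout (presumably because target shutdown can outlast the default), and only then delete the backing bdevs. Condensed from the trace, roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 0 1 2 3; do
      $rpc ublk_stop_disk "$i"
  done
  $rpc -t 120 ublk_destroy_target   # longer timeout than the default
  for i in 0 1 2 3; do
      $rpc bdev_malloc_delete "Malloc$i"
  done
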
00:16:46.542    17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:16:46.542   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.542   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:46.542   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.542   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.799   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.799   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.799   17:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:16:46.799   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.799   17:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.057   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.057   17:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.057   17:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:16:47.057   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.057   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.315   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.315   17:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.315   17:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:16:47.315   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.315   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:16:47.574    17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:16:47.574  ************************************
00:16:47.574  END TEST test_create_multi_ublk
00:16:47.574  ************************************
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:16:47.574  
00:16:47.574  real	0m3.316s
00:16:47.574  user	0m0.834s
00:16:47.574  sys	0m0.143s
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:47.574   17:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.574   17:06:10 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:16:47.574   17:06:10 ublk -- ublk/ublk.sh@147 -- # cleanup
00:16:47.574   17:06:10 ublk -- ublk/ublk.sh@130 -- # killprocess 75041
00:16:47.574   17:06:10 ublk -- common/autotest_common.sh@954 -- # '[' -z 75041 ']'
00:16:47.574   17:06:10 ublk -- common/autotest_common.sh@958 -- # kill -0 75041
00:16:47.574    17:06:10 ublk -- common/autotest_common.sh@959 -- # uname
00:16:47.574   17:06:10 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:47.574    17:06:10 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75041
00:16:47.831  killing process with pid 75041
00:16:47.831   17:06:10 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:47.832   17:06:10 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:47.832   17:06:10 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75041'
00:16:47.832   17:06:10 ublk -- common/autotest_common.sh@973 -- # kill 75041
00:16:47.832   17:06:10 ublk -- common/autotest_common.sh@978 -- # wait 75041
00:16:48.397  [2024-12-09 17:06:11.171864] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:48.397  [2024-12-09 17:06:11.171914] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:16:48.963  
00:16:48.963  real	0m24.320s
00:16:48.963  user	0m34.457s
00:16:48.963  sys	0m9.988s
00:16:48.963   17:06:11 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:48.963   17:06:11 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.963  ************************************
00:16:48.963  END TEST ublk
00:16:48.963  ************************************
00:16:48.963   17:06:11  -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:16:48.963   17:06:11  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:48.963   17:06:11  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:48.963   17:06:11  -- common/autotest_common.sh@10 -- # set +x
00:16:48.963  ************************************
00:16:48.963  START TEST ublk_recovery
00:16:48.963  ************************************
00:16:48.963   17:06:11 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:16:48.963  * Looking for test storage...
00:16:48.963  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:16:48.963    17:06:11 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:48.963     17:06:11 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:48.963     17:06:11 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:16:49.221    17:06:12 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:49.221     17:06:12 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:49.221    17:06:12 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:16:49.221    17:06:12 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:49.221    17:06:12 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:49.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.221  		--rc genhtml_branch_coverage=1
00:16:49.221  		--rc genhtml_function_coverage=1
00:16:49.221  		--rc genhtml_legend=1
00:16:49.221  		--rc geninfo_all_blocks=1
00:16:49.221  		--rc geninfo_unexecuted_blocks=1
00:16:49.221  		
00:16:49.221  		'
00:16:49.221    17:06:12 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:49.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.221  		--rc genhtml_branch_coverage=1
00:16:49.221  		--rc genhtml_function_coverage=1
00:16:49.221  		--rc genhtml_legend=1
00:16:49.221  		--rc geninfo_all_blocks=1
00:16:49.221  		--rc geninfo_unexecuted_blocks=1
00:16:49.221  		
00:16:49.221  		'
00:16:49.221    17:06:12 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:49.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.221  		--rc genhtml_branch_coverage=1
00:16:49.221  		--rc genhtml_function_coverage=1
00:16:49.221  		--rc genhtml_legend=1
00:16:49.221  		--rc geninfo_all_blocks=1
00:16:49.221  		--rc geninfo_unexecuted_blocks=1
00:16:49.221  		
00:16:49.221  		'
00:16:49.221    17:06:12 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:49.221  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.221  		--rc genhtml_branch_coverage=1
00:16:49.221  		--rc genhtml_function_coverage=1
00:16:49.221  		--rc genhtml_legend=1
00:16:49.221  		--rc geninfo_all_blocks=1
00:16:49.221  		--rc geninfo_unexecuted_blocks=1
00:16:49.221  		
00:16:49.221  		'
00:16:49.221   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:16:49.221    17:06:12 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:16:49.221   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:16:49.221  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:49.221   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75434
00:16:49.221   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:49.221   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75434
00:16:49.221   17:06:12 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75434 ']'
00:16:49.221   17:06:12 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:49.221   17:06:12 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:49.221   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:16:49.221   17:06:12 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:49.221   17:06:12 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:49.221   17:06:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:49.222  [2024-12-09 17:06:12.133909] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:16:49.222  [2024-12-09 17:06:12.134170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75434 ]
00:16:49.480  [2024-12-09 17:06:12.290298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:49.480  [2024-12-09 17:06:12.379757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:49.480  [2024-12-09 17:06:12.379759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:16:50.046   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:50.046  [2024-12-09 17:06:12.962866] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:50.046  [2024-12-09 17:06:12.964578] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.046   17:06:12 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.046   17:06:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:50.046  malloc0
00:16:50.046   17:06:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.046   17:06:13 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:16:50.046   17:06:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.046   17:06:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:50.046  [2024-12-09 17:06:13.051187] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:16:50.046  [2024-12-09 17:06:13.051270] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:16:50.046  [2024-12-09 17:06:13.051279] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:16:50.046  [2024-12-09 17:06:13.051285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:16:50.046  [2024-12-09 17:06:13.059958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:50.046  [2024-12-09 17:06:13.059977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:50.046  [2024-12-09 17:06:13.066873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:50.046  [2024-12-09 17:06:13.066993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:16:50.304  [2024-12-09 17:06:13.088874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:16:50.304  1
00:16:50.304   17:06:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.304   17:06:13 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:16:51.237   17:06:14 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75469
00:16:51.237   17:06:14 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:16:51.237   17:06:14 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:16:51.237  fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:51.237  fio-3.35
00:16:51.237  Starting 1 process
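
[Editor's note] The recovery test pins fio to cores 2-3, away from the two SPDK reactors running on cores 0-1 (-m 0x3), and drives the ublk block device directly for a fixed 60 seconds. The invocation, as recorded in the trace above:

  # time-based 4 KiB random read/write, direct I/O, queue depth 128
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
      --numjobs=1 --iodepth=128 --ioengine=libaio \
      --rw=randrw --direct=1 --time_based --runtime=60
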
00:16:56.505   17:06:19 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75434
00:16:56.505   17:06:19 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:17:01.790  /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75434 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:17:01.790   17:06:24 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75574
00:17:01.790   17:06:24 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:01.790   17:06:24 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:17:01.790   17:06:24 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75574
00:17:01.790   17:06:24 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75574 ']'
00:17:01.790   17:06:24 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.790   17:06:24 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:01.790   17:06:24 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:01.790  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:01.790   17:06:24 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:01.790   17:06:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:01.790  [2024-12-09 17:06:24.186465] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:17:01.790  [2024-12-09 17:06:24.186587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75574 ]
00:17:01.790  [2024-12-09 17:06:24.343354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:01.790  [2024-12-09 17:06:24.438813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.790  [2024-12-09 17:06:24.438830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:17:02.055   17:06:25 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:02.055  [2024-12-09 17:06:25.037867] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:17:02.055  [2024-12-09 17:06:25.039730] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.055   17:06:25 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.055   17:06:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:02.312  malloc0
00:17:02.312   17:06:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.312   17:06:25 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:17:02.312   17:06:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.312   17:06:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:02.312  [2024-12-09 17:06:25.141994] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:17:02.312  [2024-12-09 17:06:25.142031] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:17:02.312  [2024-12-09 17:06:25.142041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:02.312  [2024-12-09 17:06:25.149902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:02.312  [2024-12-09 17:06:25.149925] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:02.312  1
00:17:02.312   17:06:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.312   17:06:25 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75469
00:17:03.245  [2024-12-09 17:06:26.149957] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:03.245  [2024-12-09 17:06:26.157869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:03.245  [2024-12-09 17:06:26.157887] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:04.178  [2024-12-09 17:06:27.157912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:04.179  [2024-12-09 17:06:27.161869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:04.179  [2024-12-09 17:06:27.161885] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:05.553  [2024-12-09 17:06:28.161906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:05.553  [2024-12-09 17:06:28.169859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:05.553  [2024-12-09 17:06:28.169875] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:05.553  [2024-12-09 17:06:28.169883] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:17:05.553  [2024-12-09 17:06:28.169949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:17:27.475  [2024-12-09 17:06:49.542886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:17:27.475  [2024-12-09 17:06:49.546691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:17:27.475  [2024-12-09 17:06:49.551061] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:17:27.475  [2024-12-09 17:06:49.551082] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
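
[Editor's note] That closes the recovery path: the target was killed with SIGKILL mid-workload, restarted under a new pid, and the existing /dev/ublkb1 was reattached rather than recreated, after which the kernel driver completed START_USER_RECOVERY and END_USER_RECOVERY. The repeated UBLK_CMD_GET_DEV_INFO polls above appear to be SPDK waiting for the device to reach a recoverable state. Condensed from the trace (pid handling and waitforlisten are elided for brevity):

  kill -9 "$spdk_pid"                        # hard-crash the target under load
  sleep 5
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &  # restart it
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096
  $rpc ublk_recover_disk malloc0 1           # reattach; do not start a new disk
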
00:17:54.005  
00:17:54.005  fio_test: (groupid=0, jobs=1): err= 0: pid=75472: Mon Dec  9 17:07:14 2024
00:17:54.005    read: IOPS=15.0k, BW=58.5MiB/s (61.4MB/s)(3513MiB/60002msec)
00:17:54.005      slat (nsec): min=1081, max=1312.2k, avg=4947.89, stdev=2065.04
00:17:54.005      clat (usec): min=681, max=30457k, avg=3957.44, stdev=240305.73
00:17:54.005       lat (usec): min=686, max=30457k, avg=3962.39, stdev=240305.73
00:17:54.005      clat percentiles (usec):
00:17:54.005       |  1.00th=[ 1696],  5.00th=[ 1811], 10.00th=[ 1827], 20.00th=[ 1860],
00:17:54.005       | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1958],
00:17:54.005       | 70.00th=[ 1991], 80.00th=[ 2024], 90.00th=[ 2114], 95.00th=[ 2966],
00:17:54.005       | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7439], 99.95th=[ 8225],
00:17:54.005       | 99.99th=[13173]
00:17:54.005     bw (  KiB/s): min=32448, max=129192, per=100.00%, avg=120003.22, stdev=16512.00, samples=59
00:17:54.005     iops        : min= 8112, max=32298, avg=30000.80, stdev=4128.00, samples=59
00:17:54.005    write: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(3508MiB/60002msec); 0 zone resets
00:17:54.005      slat (nsec): min=1086, max=2628.8k, avg=4981.72, stdev=3167.96
00:17:54.005      clat (usec): min=666, max=30458k, avg=4577.90, stdev=272681.08
00:17:54.005       lat (usec): min=671, max=30458k, avg=4582.88, stdev=272681.08
00:17:54.005      clat percentiles (usec):
00:17:54.005       |  1.00th=[ 1729],  5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942],
00:17:54.005       | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2024], 60.00th=[ 2057],
00:17:54.005       | 70.00th=[ 2073], 80.00th=[ 2114], 90.00th=[ 2212], 95.00th=[ 2868],
00:17:54.005       | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 7570], 99.95th=[ 8455],
00:17:54.005       | 99.99th=[13435]
00:17:54.005     bw (  KiB/s): min=32096, max=129336, per=100.00%, avg=119813.42, stdev=16664.80, samples=59
00:17:54.005     iops        : min= 8024, max=32334, avg=29953.36, stdev=4166.20, samples=59
00:17:54.005    lat (usec)   : 750=0.01%, 1000=0.01%
00:17:54.005    lat (msec)   : 2=58.02%, 4=39.37%, 10=2.57%, 20=0.03%, >=2000=0.01%
00:17:54.005    cpu          : usr=3.46%, sys=15.16%, ctx=60034, majf=0, minf=13
00:17:54.006    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:54.006       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.006       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.006       issued rwts: total=899368,898035,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.006       latency   : target=0, window=0, percentile=100.00%, depth=128
00:17:54.006  
00:17:54.006  Run status group 0 (all jobs):
00:17:54.006     READ: bw=58.5MiB/s (61.4MB/s), 58.5MiB/s-58.5MiB/s (61.4MB/s-61.4MB/s), io=3513MiB (3684MB), run=60002-60002msec
00:17:54.006    WRITE: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=3508MiB (3678MB), run=60002-60002msec
00:17:54.006  
00:17:54.006  Disk stats (read/write):
00:17:54.006    ublkb1: ios=895840/894567, merge=0/0, ticks=3505933/3984979, in_queue=7490912, util=99.91%
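
[Editor's note] As a quick sanity check, the summary numbers are self-consistent: dividing the transferred volume by the runtime reproduces the reported READ bandwidth.

  awk 'BEGIN { printf "%.1f MiB/s\n", 3513 / 60.002 }'   # prints 58.5 MiB/s
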
00:17:54.006   17:07:14 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:54.006  [2024-12-09 17:07:14.355458] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:17:54.006  [2024-12-09 17:07:14.383926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:54.006  [2024-12-09 17:07:14.384083] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:17:54.006  [2024-12-09 17:07:14.391869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:54.006  [2024-12-09 17:07:14.391962] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:17:54.006  [2024-12-09 17:07:14.391969] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.006   17:07:14 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:54.006  [2024-12-09 17:07:14.405965] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:17:54.006  [2024-12-09 17:07:14.415860] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:17:54.006  [2024-12-09 17:07:14.415892] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.006   17:07:14 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:17:54.006   17:07:14 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:17:54.006   17:07:14 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75574
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75574 ']'
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75574
00:17:54.006    17:07:14 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:54.006    17:07:14 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75574
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:54.006  killing process with pid 75574
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75574'
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75574
00:17:54.006   17:07:14 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75574
00:17:54.006  [2024-12-09 17:07:15.620188] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:17:54.006  [2024-12-09 17:07:15.620252] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:17:54.006  
00:17:54.006  real	1m4.474s
00:17:54.006  user	1m46.952s
00:17:54.006  sys	0m22.454s
00:17:54.006   17:07:16 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:54.006   17:07:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:54.006  ************************************
00:17:54.006  END TEST ublk_recovery
00:17:54.006  ************************************
00:17:54.006   17:07:16  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:17:54.006   17:07:16  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@260 -- # timing_exit lib
00:17:54.006   17:07:16  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:54.006   17:07:16  -- common/autotest_common.sh@10 -- # set +x
00:17:54.006   17:07:16  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:17:54.006   17:07:16  -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:54.006   17:07:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:54.006   17:07:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:54.006   17:07:16  -- common/autotest_common.sh@10 -- # set +x
00:17:54.006  ************************************
00:17:54.006  START TEST ftl
00:17:54.006  ************************************
00:17:54.006   17:07:16 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:54.006  * Looking for test storage...
00:17:54.006  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:54.006     17:07:16 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:54.006     17:07:16 ftl -- common/autotest_common.sh@1711 -- # lcov --version
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:54.006    17:07:16 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:54.006    17:07:16 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:54.006    17:07:16 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:54.006    17:07:16 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:17:54.006    17:07:16 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:17:54.006    17:07:16 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:17:54.006    17:07:16 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:17:54.006    17:07:16 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:17:54.006    17:07:16 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:17:54.006    17:07:16 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:17:54.006    17:07:16 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:54.006    17:07:16 ftl -- scripts/common.sh@344 -- # case "$op" in
00:17:54.006    17:07:16 ftl -- scripts/common.sh@345 -- # : 1
00:17:54.006    17:07:16 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:54.006    17:07:16 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:54.006     17:07:16 ftl -- scripts/common.sh@365 -- # decimal 1
00:17:54.006     17:07:16 ftl -- scripts/common.sh@353 -- # local d=1
00:17:54.006     17:07:16 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:54.006     17:07:16 ftl -- scripts/common.sh@355 -- # echo 1
00:17:54.006    17:07:16 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:17:54.006     17:07:16 ftl -- scripts/common.sh@366 -- # decimal 2
00:17:54.006     17:07:16 ftl -- scripts/common.sh@353 -- # local d=2
00:17:54.006     17:07:16 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:54.006     17:07:16 ftl -- scripts/common.sh@355 -- # echo 2
00:17:54.006    17:07:16 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:17:54.006    17:07:16 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:54.006    17:07:16 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:54.006    17:07:16 ftl -- scripts/common.sh@368 -- # return 0
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:54.006  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:54.006  		--rc genhtml_branch_coverage=1
00:17:54.006  		--rc genhtml_function_coverage=1
00:17:54.006  		--rc genhtml_legend=1
00:17:54.006  		--rc geninfo_all_blocks=1
00:17:54.006  		--rc geninfo_unexecuted_blocks=1
00:17:54.006  		
00:17:54.006  		'
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:54.006  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:54.006  		--rc genhtml_branch_coverage=1
00:17:54.006  		--rc genhtml_function_coverage=1
00:17:54.006  		--rc genhtml_legend=1
00:17:54.006  		--rc geninfo_all_blocks=1
00:17:54.006  		--rc geninfo_unexecuted_blocks=1
00:17:54.006  		
00:17:54.006  		'
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:54.006  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:54.006  		--rc genhtml_branch_coverage=1
00:17:54.006  		--rc genhtml_function_coverage=1
00:17:54.006  		--rc genhtml_legend=1
00:17:54.006  		--rc geninfo_all_blocks=1
00:17:54.006  		--rc geninfo_unexecuted_blocks=1
00:17:54.006  		
00:17:54.006  		'
00:17:54.006    17:07:16 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:54.006  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:54.006  		--rc genhtml_branch_coverage=1
00:17:54.006  		--rc genhtml_function_coverage=1
00:17:54.006  		--rc genhtml_legend=1
00:17:54.006  		--rc geninfo_all_blocks=1
00:17:54.006  		--rc geninfo_unexecuted_blocks=1
00:17:54.006  		
00:17:54.006  		'
00:17:54.006   17:07:16 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:54.006      17:07:16 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:54.006     17:07:16 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:54.006    17:07:16 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:54.006     17:07:16 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:54.006    17:07:16 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:54.006    17:07:16 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:54.006    17:07:16 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:54.006    17:07:16 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:54.006    17:07:16 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:54.006    17:07:16 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:54.006    17:07:16 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:54.006    17:07:16 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:54.006    17:07:16 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:54.006    17:07:16 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:54.006    17:07:16 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:54.006    17:07:16 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:54.006    17:07:16 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:54.006    17:07:16 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:54.007    17:07:16 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:54.007    17:07:16 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:54.007    17:07:16 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:54.007    17:07:16 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:54.007    17:07:16 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:54.007    17:07:16 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:54.007    17:07:16 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:54.007    17:07:16 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:54.007    17:07:16 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:54.007    17:07:16 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:54.007   17:07:16 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:54.007   17:07:16 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:17:54.007   17:07:16 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:17:54.007   17:07:16 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:17:54.007   17:07:16 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:17:54.007   17:07:16 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:54.007  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:54.007  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:54.007  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:54.007  0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:54.007  0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:54.264   17:07:17 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76374
00:17:54.264   17:07:17 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76374
00:17:54.264   17:07:17 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:17:54.264   17:07:17 ftl -- common/autotest_common.sh@835 -- # '[' -z 76374 ']'
00:17:54.264   17:07:17 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:54.264   17:07:17 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:54.264  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:54.264   17:07:17 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:54.264   17:07:17 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:54.264   17:07:17 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:54.264  [2024-12-09 17:07:17.139654] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:17:54.264  [2024-12-09 17:07:17.139772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76374 ]
00:17:54.264  [2024-12-09 17:07:17.293926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:54.521  [2024-12-09 17:07:17.383087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:55.089   17:07:17 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:55.089   17:07:17 ftl -- common/autotest_common.sh@868 -- # return 0
00:17:55.089   17:07:17 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:17:55.350   17:07:18 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:17:55.922   17:07:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:17:55.922    17:07:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:17:56.491    17:07:19 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:17:56.491    17:07:19 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@50 -- # break
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:17:56.491   17:07:19 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:17:56.491    17:07:19 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:17:56.491    17:07:19 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:17:56.751   17:07:19 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:17:56.751   17:07:19 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:17:56.751   17:07:19 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:17:56.751   17:07:19 ftl -- ftl/ftl.sh@63 -- # break
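
[Editor's note] Disk selection for FTL is pure jq over bdev_get_bdevs: the cache candidate must be a non-zoned namespace with a 64-byte metadata size and at least 1310720 blocks, and the base candidate is any other sufficiently large non-zoned namespace on a different PCI address. The two filters from the trace, spelled out on their own:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # cache disks: 64B metadata, not zoned, >= 1310720 blocks
  $rpc bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'
  # base disks: everything else big enough, excluding the chosen cache
  $rpc bdev_get_bdevs | jq -r '.[]
      | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
               and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'
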
00:17:56.751   17:07:19 ftl -- ftl/ftl.sh@66 -- # killprocess 76374
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@954 -- # '[' -z 76374 ']'
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@958 -- # kill -0 76374
00:17:56.751    17:07:19 ftl -- common/autotest_common.sh@959 -- # uname
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:56.751    17:07:19 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76374
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:56.751  killing process with pid 76374
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76374'
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@973 -- # kill 76374
00:17:56.751   17:07:19 ftl -- common/autotest_common.sh@978 -- # wait 76374
00:17:58.138   17:07:20 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:17:58.138   17:07:20 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:17:58.138   17:07:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:58.138   17:07:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:58.138   17:07:20 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:58.138  ************************************
00:17:58.138  START TEST ftl_fio_basic
00:17:58.138  ************************************
00:17:58.138   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:17:58.138  * Looking for test storage...
00:17:58.138  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:17:58.138    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:58.138     17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
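The scripts/common.sh@333-@368 walk above is a component-wise version comparison: the last field of lcov --version (1.15) is split on the characters .-: and compared against 2, with missing components treated as 0; 1 < 2 already decides on the first component, so lt returns 0 and the legacy --rc lcov_* options below get enabled. A compact sketch of the same logic (cmp_lt is a hypothetical standalone name, and components are assumed numeric as the @354 regex checks enforce):

    cmp_lt() {    # returns 0 when version $1 < version $2
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}      # pad the shorter version with 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                 # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1 etc."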
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:58.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.139  		--rc genhtml_branch_coverage=1
00:17:58.139  		--rc genhtml_function_coverage=1
00:17:58.139  		--rc genhtml_legend=1
00:17:58.139  		--rc geninfo_all_blocks=1
00:17:58.139  		--rc geninfo_unexecuted_blocks=1
00:17:58.139  		
00:17:58.139  		'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:58.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.139  		--rc genhtml_branch_coverage=1
00:17:58.139  		--rc genhtml_function_coverage=1
00:17:58.139  		--rc genhtml_legend=1
00:17:58.139  		--rc geninfo_all_blocks=1
00:17:58.139  		--rc geninfo_unexecuted_blocks=1
00:17:58.139  		
00:17:58.139  		'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:58.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.139  		--rc genhtml_branch_coverage=1
00:17:58.139  		--rc genhtml_function_coverage=1
00:17:58.139  		--rc genhtml_legend=1
00:17:58.139  		--rc geninfo_all_blocks=1
00:17:58.139  		--rc geninfo_unexecuted_blocks=1
00:17:58.139  		
00:17:58.139  		'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:58.139  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.139  		--rc genhtml_branch_coverage=1
00:17:58.139  		--rc genhtml_function_coverage=1
00:17:58.139  		--rc genhtml_legend=1
00:17:58.139  		--rc geninfo_all_blocks=1
00:17:58.139  		--rc geninfo_unexecuted_blocks=1
00:17:58.139  		
00:17:58.139  		'
00:17:58.139   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:58.139      17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:17:58.139     17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:58.139     17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:58.139    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:58.401    17:07:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
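fio.sh@11-@14 above keys the workloads off a bash associative array: the suite name passed on the command line ('basic' in this run) selects which fio job files run, and @34 aborts if the lookup comes back empty. The same pattern in isolation:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    tests=${suite[${3:-basic}]}      # $3 = suite name given to fio.sh
    [ -z "$tests" ] && { echo "unknown suite: ${3:-basic}" >&2; exit 1; }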
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76506
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76506
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76506 ']'
00:17:58.401  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:58.401   17:07:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:58.401  [2024-12-09 17:07:21.260442] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:17:58.401  [2024-12-09 17:07:21.260573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ]
00:17:58.401  [2024-12-09 17:07:21.415245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:58.663  [2024-12-09 17:07:21.505897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:58.663  [2024-12-09 17:07:21.506747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:58.663  [2024-12-09 17:07:21.506801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:59.234   17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:59.234   17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0
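@835-@868 above is waitforlisten: after launching spdk_tgt -m 7 in the background, poll until the target either dies or answers on /var/tmp/spdk.sock. A hedged reconstruction (the retry cadence is guessed; the trace only shows rpc_addr, max_retries=100 and the final return 0):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during init
            # rpc_get_methods succeeds only once the RPC server is listening
            if "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

In this run the wait was short: spdk_tgt went up at 17:07:21, the -m 7 mask produced the three reactors on cores 0-2 seen above, and the poll returned 0 at 17:07:22.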
00:17:59.234    17:07:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:17:59.234    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:17:59.234    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:17:59.234    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:17:59.234    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:17:59.234     17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:17:59.492    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:17:59.492    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:17:59.492     17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:17:59.492     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:17:59.492     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:17:59.492     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:17:59.492     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:17:59.492      17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:17:59.751    {
00:17:59.751      "name": "nvme0n1",
00:17:59.751      "aliases": [
00:17:59.751        "32effcf5-6164-4625-919d-39d037e81968"
00:17:59.751      ],
00:17:59.751      "product_name": "NVMe disk",
00:17:59.751      "block_size": 4096,
00:17:59.751      "num_blocks": 1310720,
00:17:59.751      "uuid": "32effcf5-6164-4625-919d-39d037e81968",
00:17:59.751      "numa_id": -1,
00:17:59.751      "assigned_rate_limits": {
00:17:59.751        "rw_ios_per_sec": 0,
00:17:59.751        "rw_mbytes_per_sec": 0,
00:17:59.751        "r_mbytes_per_sec": 0,
00:17:59.751        "w_mbytes_per_sec": 0
00:17:59.751      },
00:17:59.751      "claimed": false,
00:17:59.751      "zoned": false,
00:17:59.751      "supported_io_types": {
00:17:59.751        "read": true,
00:17:59.751        "write": true,
00:17:59.751        "unmap": true,
00:17:59.751        "flush": true,
00:17:59.751        "reset": true,
00:17:59.751        "nvme_admin": true,
00:17:59.751        "nvme_io": true,
00:17:59.751        "nvme_io_md": false,
00:17:59.751        "write_zeroes": true,
00:17:59.751        "zcopy": false,
00:17:59.751        "get_zone_info": false,
00:17:59.751        "zone_management": false,
00:17:59.751        "zone_append": false,
00:17:59.751        "compare": true,
00:17:59.751        "compare_and_write": false,
00:17:59.751        "abort": true,
00:17:59.751        "seek_hole": false,
00:17:59.751        "seek_data": false,
00:17:59.751        "copy": true,
00:17:59.751        "nvme_iov_md": false
00:17:59.751      },
00:17:59.751      "driver_specific": {
00:17:59.751        "nvme": [
00:17:59.751          {
00:17:59.751            "pci_address": "0000:00:11.0",
00:17:59.751            "trid": {
00:17:59.751              "trtype": "PCIe",
00:17:59.751              "traddr": "0000:00:11.0"
00:17:59.751            },
00:17:59.751            "ctrlr_data": {
00:17:59.751              "cntlid": 0,
00:17:59.751              "vendor_id": "0x1b36",
00:17:59.751              "model_number": "QEMU NVMe Ctrl",
00:17:59.751              "serial_number": "12341",
00:17:59.751              "firmware_revision": "8.0.0",
00:17:59.751              "subnqn": "nqn.2019-08.org.qemu:12341",
00:17:59.751              "oacs": {
00:17:59.751                "security": 0,
00:17:59.751                "format": 1,
00:17:59.751                "firmware": 0,
00:17:59.751                "ns_manage": 1
00:17:59.751              },
00:17:59.751              "multi_ctrlr": false,
00:17:59.751              "ana_reporting": false
00:17:59.751            },
00:17:59.751            "vs": {
00:17:59.751              "nvme_version": "1.4"
00:17:59.751            },
00:17:59.751            "ns_data": {
00:17:59.751              "id": 1,
00:17:59.751              "can_share": false
00:17:59.751            }
00:17:59.751          }
00:17:59.751        ],
00:17:59.751        "mp_policy": "active_passive"
00:17:59.751      }
00:17:59.751    }
00:17:59.751  ]'
00:17:59.751      17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:17:59.751      17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120
00:17:59.751    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120
00:17:59.751    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
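get_bdev_size (@1382-@1392 above) derives a bdev's size in MiB from the two jq probes: block_size × num_blocks / 2^20. For nvme0n1 that is 4096 × 1310720 B = 5 GiB, hence base_size=5120; the guard at common.sh@64 ([[ 103424 -le 5120 ]]) is therefore false, and the helper goes on to build the requested 103424 MiB device as a logical volume instead.

    # the size computation from the trace, as plain bash arithmetic
    bs=4096 nb=1310720
    echo $(( bs * nb / 1024 / 1024 ))   # 5120 MiB: the 5 GiB QEMU namespace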
00:17:59.751    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:17:59.751     17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:18:00.009    17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores=
00:18:00.009     17:07:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:18:00.009    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7a5e2e16-26d6-4a02-912a-ebead97d8fab
00:18:00.009    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7a5e2e16-26d6-4a02-912a-ebead97d8fab
00:18:00.267   17:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=df1cf27a-f161-4e9d-8da8-df73dd2f484d
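Since 103424 MiB does not fit on the 5120 MiB disk, common.sh@67-@69 builds it virtually: wipe any stale lvstores, create a fresh lvstore on nvme0n1, then create a thin-provisioned (-t) lvol of the full requested size, so only blocks actually written consume clusters. A hedged restatement of the three RPC calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'    # clear_lvols: none found here
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)     # -> 7a5e2e16-...
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"  # -> thin 103424 MiB lvol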
00:18:00.267    17:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.267    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0
00:18:00.267    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:18:00.267    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.267    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size=
00:18:00.267     17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.267     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.267     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:00.267     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:18:00.267     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:18:00.267      17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.525     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:00.525    {
00:18:00.525      "name": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:00.525      "aliases": [
00:18:00.525        "lvs/nvme0n1p0"
00:18:00.525      ],
00:18:00.525      "product_name": "Logical Volume",
00:18:00.525      "block_size": 4096,
00:18:00.525      "num_blocks": 26476544,
00:18:00.525      "uuid": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:00.525      "assigned_rate_limits": {
00:18:00.525        "rw_ios_per_sec": 0,
00:18:00.525        "rw_mbytes_per_sec": 0,
00:18:00.525        "r_mbytes_per_sec": 0,
00:18:00.525        "w_mbytes_per_sec": 0
00:18:00.525      },
00:18:00.525      "claimed": false,
00:18:00.525      "zoned": false,
00:18:00.525      "supported_io_types": {
00:18:00.525        "read": true,
00:18:00.525        "write": true,
00:18:00.525        "unmap": true,
00:18:00.525        "flush": false,
00:18:00.525        "reset": true,
00:18:00.525        "nvme_admin": false,
00:18:00.525        "nvme_io": false,
00:18:00.525        "nvme_io_md": false,
00:18:00.525        "write_zeroes": true,
00:18:00.525        "zcopy": false,
00:18:00.525        "get_zone_info": false,
00:18:00.525        "zone_management": false,
00:18:00.525        "zone_append": false,
00:18:00.525        "compare": false,
00:18:00.525        "compare_and_write": false,
00:18:00.525        "abort": false,
00:18:00.525        "seek_hole": true,
00:18:00.525        "seek_data": true,
00:18:00.525        "copy": false,
00:18:00.525        "nvme_iov_md": false
00:18:00.525      },
00:18:00.525      "driver_specific": {
00:18:00.525        "lvol": {
00:18:00.525          "lvol_store_uuid": "7a5e2e16-26d6-4a02-912a-ebead97d8fab",
00:18:00.525          "base_bdev": "nvme0n1",
00:18:00.525          "thin_provision": true,
00:18:00.525          "num_allocated_clusters": 0,
00:18:00.525          "snapshot": false,
00:18:00.525          "clone": false,
00:18:00.525          "esnap_clone": false
00:18:00.525        }
00:18:00.525      }
00:18:00.525    }
00:18:00.525  ]'
00:18:00.525      17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:00.525     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:18:00.525      17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:00.525     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:18:00.525     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:18:00.525     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:18:00.525    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171
00:18:00.525    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev
00:18:00.525     17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:18:00.783    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:18:00.783    17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]]
00:18:00.783     17:07:23 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.783     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:00.783     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:00.783     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:18:00.783     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:18:00.783      17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:01.041     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:01.041    {
00:18:01.041      "name": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:01.041      "aliases": [
00:18:01.041        "lvs/nvme0n1p0"
00:18:01.041      ],
00:18:01.041      "product_name": "Logical Volume",
00:18:01.041      "block_size": 4096,
00:18:01.041      "num_blocks": 26476544,
00:18:01.041      "uuid": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:01.041      "assigned_rate_limits": {
00:18:01.041        "rw_ios_per_sec": 0,
00:18:01.041        "rw_mbytes_per_sec": 0,
00:18:01.041        "r_mbytes_per_sec": 0,
00:18:01.041        "w_mbytes_per_sec": 0
00:18:01.041      },
00:18:01.041      "claimed": false,
00:18:01.041      "zoned": false,
00:18:01.041      "supported_io_types": {
00:18:01.041        "read": true,
00:18:01.041        "write": true,
00:18:01.041        "unmap": true,
00:18:01.041        "flush": false,
00:18:01.041        "reset": true,
00:18:01.041        "nvme_admin": false,
00:18:01.041        "nvme_io": false,
00:18:01.041        "nvme_io_md": false,
00:18:01.041        "write_zeroes": true,
00:18:01.041        "zcopy": false,
00:18:01.041        "get_zone_info": false,
00:18:01.041        "zone_management": false,
00:18:01.041        "zone_append": false,
00:18:01.041        "compare": false,
00:18:01.041        "compare_and_write": false,
00:18:01.041        "abort": false,
00:18:01.041        "seek_hole": true,
00:18:01.041        "seek_data": true,
00:18:01.041        "copy": false,
00:18:01.041        "nvme_iov_md": false
00:18:01.041      },
00:18:01.041      "driver_specific": {
00:18:01.041        "lvol": {
00:18:01.041          "lvol_store_uuid": "7a5e2e16-26d6-4a02-912a-ebead97d8fab",
00:18:01.041          "base_bdev": "nvme0n1",
00:18:01.041          "thin_provision": true,
00:18:01.041          "num_allocated_clusters": 0,
00:18:01.041          "snapshot": false,
00:18:01.041          "clone": false,
00:18:01.041          "esnap_clone": false
00:18:01.041        }
00:18:01.041      }
00:18:01.041    }
00:18:01.041  ]'
00:18:01.041      17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:01.041     17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:18:01.041      17:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:01.041     17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:18:01.041     17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:18:01.041     17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:18:01.041    17:07:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171
00:18:01.041    17:07:24 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:18:01.300   17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0
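create_nv_cache_bdev (common.sh@35-@50 above) attaches the second controller and carves the write-buffer cache out of it with a split. The traced cache_size=5171 MiB is consistent with 5 % of the 103424 MiB base bdev (103424 / 20 = 5171.2), though the trace does not show the formula itself. Restated:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0  # -> nvc0n1
    cache_size=5171                                     # MiB, sized from the base bdev
    $rpc bdev_split_create nvc0n1 -s "$cache_size" 1    # one split -> nvc0n1p0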
00:18:01.300   17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60
00:18:01.300   17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']'
00:18:01.300  /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected
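The shell error above is real but benign in this run: fio.sh line 52 expands an unset variable, leaving the test as '[' -eq 1 ']' with no left operand, so [ reports "unary operator expected", the condition evaluates false, and execution simply falls through to @56. The usual hardening, shown with a placeholder name since the trace does not reveal the actual variable:

    # as traced (flag unset/empty -> "[: -eq: unary operator expected"):
    [ $flag -eq 1 ]
    # hardened: quote the expansion and give it a numeric default
    [ "${flag:-0}" -eq 1 ]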
00:18:01.300    17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:01.300    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:01.300    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:18:01.300    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:18:01.300    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:18:01.300     17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df1cf27a-f161-4e9d-8da8-df73dd2f484d
00:18:01.559    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:18:01.559    {
00:18:01.559      "name": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:01.559      "aliases": [
00:18:01.559        "lvs/nvme0n1p0"
00:18:01.559      ],
00:18:01.559      "product_name": "Logical Volume",
00:18:01.559      "block_size": 4096,
00:18:01.559      "num_blocks": 26476544,
00:18:01.559      "uuid": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:01.559      "assigned_rate_limits": {
00:18:01.559        "rw_ios_per_sec": 0,
00:18:01.559        "rw_mbytes_per_sec": 0,
00:18:01.559        "r_mbytes_per_sec": 0,
00:18:01.559        "w_mbytes_per_sec": 0
00:18:01.559      },
00:18:01.559      "claimed": false,
00:18:01.559      "zoned": false,
00:18:01.559      "supported_io_types": {
00:18:01.559        "read": true,
00:18:01.559        "write": true,
00:18:01.559        "unmap": true,
00:18:01.559        "flush": false,
00:18:01.559        "reset": true,
00:18:01.559        "nvme_admin": false,
00:18:01.559        "nvme_io": false,
00:18:01.559        "nvme_io_md": false,
00:18:01.559        "write_zeroes": true,
00:18:01.559        "zcopy": false,
00:18:01.559        "get_zone_info": false,
00:18:01.559        "zone_management": false,
00:18:01.559        "zone_append": false,
00:18:01.559        "compare": false,
00:18:01.559        "compare_and_write": false,
00:18:01.559        "abort": false,
00:18:01.559        "seek_hole": true,
00:18:01.559        "seek_data": true,
00:18:01.559        "copy": false,
00:18:01.559        "nvme_iov_md": false
00:18:01.559      },
00:18:01.559      "driver_specific": {
00:18:01.559        "lvol": {
00:18:01.559          "lvol_store_uuid": "7a5e2e16-26d6-4a02-912a-ebead97d8fab",
00:18:01.559          "base_bdev": "nvme0n1",
00:18:01.559          "thin_provision": true,
00:18:01.559          "num_allocated_clusters": 0,
00:18:01.559          "snapshot": false,
00:18:01.559          "clone": false,
00:18:01.559          "esnap_clone": false
00:18:01.559        }
00:18:01.559      }
00:18:01.559    }
00:18:01.559  ]'
00:18:01.559     17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:18:01.559    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:18:01.559     17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:18:01.559    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:18:01.559    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:18:01.559    17:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:18:01.559   17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60
00:18:01.559   17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']'
00:18:01.559   17:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d df1cf27a-f161-4e9d-8da8-df73dd2f484d -c nvc0n1p0 --l2p_dram_limit 60
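fio.sh@60 finally assembles the FTL bdev: the thin lvol is the base device, the split nvc0n1p0 is the non-volatile write cache, and the resident L2P is capped at the 60 MiB computed at @56. The -t 240 raises the RPC client timeout because bdev_ftl_create blocks through the entire startup sequence that follows, including the NV-cache scrub (about 4 s here, potentially far longer on real media). The call, restated standalone:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -b: new bdev name; -d: base bdev; -c: NV cache bdev; L2P DRAM cap in MiB
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d df1cf27a-f161-4e9d-8da8-df73dd2f484d \
        -c nvc0n1p0 --l2p_dram_limit 60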
00:18:01.818  [2024-12-09 17:07:24.656920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.656961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:18:01.818  [2024-12-09 17:07:24.656975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:18:01.818  [2024-12-09 17:07:24.656982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.657029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.657041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:18:01.818  [2024-12-09 17:07:24.657049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.029 ms
00:18:01.818  [2024-12-09 17:07:24.657055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.657083] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:01.818  [2024-12-09 17:07:24.657665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:01.818  [2024-12-09 17:07:24.657688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.657695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:18:01.818  [2024-12-09 17:07:24.657704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.615 ms
00:18:01.818  [2024-12-09 17:07:24.657711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.657764] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 599d7cb5-35c6-4a0e-a939-841b79da8dc3
00:18:01.818  [2024-12-09 17:07:24.659057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.659085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:18:01.818  [2024-12-09 17:07:24.659095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:18:01.818  [2024-12-09 17:07:24.659104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.665878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.665905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:18:01.818  [2024-12-09 17:07:24.665914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.727 ms
00:18:01.818  [2024-12-09 17:07:24.665922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.666004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.666014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:18:01.818  [2024-12-09 17:07:24.666022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.060 ms
00:18:01.818  [2024-12-09 17:07:24.666032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.666076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.666089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:18:01.818  [2024-12-09 17:07:24.666096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:18:01.818  [2024-12-09 17:07:24.666104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.666123] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:01.818  [2024-12-09 17:07:24.669374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.669397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:18:01.818  [2024-12-09 17:07:24.669408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.252 ms
00:18:01.818  [2024-12-09 17:07:24.669417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.669450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.669458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:18:01.818  [2024-12-09 17:07:24.669467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:18:01.818  [2024-12-09 17:07:24.669473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.669492] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:18:01.818  [2024-12-09 17:07:24.669614] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:01.818  [2024-12-09 17:07:24.669629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:01.818  [2024-12-09 17:07:24.669637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:01.818  [2024-12-09 17:07:24.669647] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:18:01.818  [2024-12-09 17:07:24.669655] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:18:01.818  [2024-12-09 17:07:24.669664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:18:01.818  [2024-12-09 17:07:24.669671] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:18:01.818  [2024-12-09 17:07:24.669678] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:18:01.818  [2024-12-09 17:07:24.669684] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:18:01.818  [2024-12-09 17:07:24.669691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.669699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:18:01.818  [2024-12-09 17:07:24.669707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.201 ms
00:18:01.818  [2024-12-09 17:07:24.669712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.669782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.818  [2024-12-09 17:07:24.669789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:18:01.818  [2024-12-09 17:07:24.669798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:18:01.818  [2024-12-09 17:07:24.669803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.818  [2024-12-09 17:07:24.669902] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:01.818  [2024-12-09 17:07:24.669911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:01.818  [2024-12-09 17:07:24.669921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:18:01.818  [2024-12-09 17:07:24.669927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:18:01.818  [2024-12-09 17:07:24.669935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:01.818  [2024-12-09 17:07:24.669940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:18:01.818  [2024-12-09 17:07:24.669947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:18:01.818  [2024-12-09 17:07:24.669952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:01.819  [2024-12-09 17:07:24.669960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:18:01.819  [2024-12-09 17:07:24.669965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:18:01.819  [2024-12-09 17:07:24.669971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:01.819  [2024-12-09 17:07:24.669976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:18:01.819  [2024-12-09 17:07:24.669983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:18:01.819  [2024-12-09 17:07:24.669988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:01.819  [2024-12-09 17:07:24.669994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:18:01.819  [2024-12-09 17:07:24.669999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:01.819  [2024-12-09 17:07:24.670017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:01.819  [2024-12-09 17:07:24.670036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:01.819  [2024-12-09 17:07:24.670053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:01.819  [2024-12-09 17:07:24.670071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:01.819  [2024-12-09 17:07:24.670088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:01.819  [2024-12-09 17:07:24.670107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:18:01.819  [2024-12-09 17:07:24.670129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:01.819  [2024-12-09 17:07:24.670134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:18:01.819  [2024-12-09 17:07:24.670141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:18:01.819  [2024-12-09 17:07:24.670146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:01.819  [2024-12-09 17:07:24.670152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:18:01.819  [2024-12-09 17:07:24.670157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:18:01.819  [2024-12-09 17:07:24.670168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:18:01.819  [2024-12-09 17:07:24.670175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670179] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:01.819  [2024-12-09 17:07:24.670187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:18:01.819  [2024-12-09 17:07:24.670193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:18:01.819  [2024-12-09 17:07:24.670206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:18:01.819  [2024-12-09 17:07:24.670215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:18:01.819  [2024-12-09 17:07:24.670221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:18:01.819  [2024-12-09 17:07:24.670228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:01.819  [2024-12-09 17:07:24.670233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:18:01.819  [2024-12-09 17:07:24.670239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:18:01.819  [2024-12-09 17:07:24.670246] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:01.819  [2024-12-09 17:07:24.670255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:18:01.819  [2024-12-09 17:07:24.670269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:18:01.819  [2024-12-09 17:07:24.670274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:18:01.819  [2024-12-09 17:07:24.670281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:18:01.819  [2024-12-09 17:07:24.670287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:18:01.819  [2024-12-09 17:07:24.670295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:18:01.819  [2024-12-09 17:07:24.670301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:18:01.819  [2024-12-09 17:07:24.670308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:18:01.819  [2024-12-09 17:07:24.670313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:18:01.819  [2024-12-09 17:07:24.670322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:18:01.819  [2024-12-09 17:07:24.670353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:01.819  [2024-12-09 17:07:24.670361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:01.819  [2024-12-09 17:07:24.670375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:01.819  [2024-12-09 17:07:24.670380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:01.819  [2024-12-09 17:07:24.670387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:01.819  [2024-12-09 17:07:24.670392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:01.819  [2024-12-09 17:07:24.670399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:18:01.819  [2024-12-09 17:07:24.670405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.554 ms
00:18:01.819  [2024-12-09 17:07:24.670413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:01.819  [2024-12-09 17:07:24.670467] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:18:01.819  [2024-12-09 17:07:24.670481] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:18:06.001  [2024-12-09 17:07:28.293380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.293452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:18:06.001  [2024-12-09 17:07:28.293468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3622.898 ms
00:18:06.001  [2024-12-09 17:07:28.293478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.321294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.321340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:18:06.001  [2024-12-09 17:07:28.321353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.612 ms
00:18:06.001  [2024-12-09 17:07:28.321363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.321492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.321506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:18:06.001  [2024-12-09 17:07:28.321514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.061 ms
00:18:06.001  [2024-12-09 17:07:28.321526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.369748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.369791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:18:06.001  [2024-12-09 17:07:28.369807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 48.185 ms
00:18:06.001  [2024-12-09 17:07:28.369819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.369870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.369883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:18:06.001  [2024-12-09 17:07:28.369892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:18:06.001  [2024-12-09 17:07:28.369902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.370360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.370390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:18:06.001  [2024-12-09 17:07:28.370400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.378 ms
00:18:06.001  [2024-12-09 17:07:28.370413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.370540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.370560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:18:06.001  [2024-12-09 17:07:28.370569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.103 ms
00:18:06.001  [2024-12-09 17:07:28.370581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.386506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.386537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:18:06.001  [2024-12-09 17:07:28.386548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.900 ms
00:18:06.001  [2024-12-09 17:07:28.386559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.001  [2024-12-09 17:07:28.398743] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:18:06.001  [2024-12-09 17:07:28.415802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.001  [2024-12-09 17:07:28.415834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:18:06.002  [2024-12-09 17:07:28.415863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.157 ms
00:18:06.002  [2024-12-09 17:07:28.415872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
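The L2P numbers above are internally consistent and worth checking: the layout dump reported 20971520 L2P entries at 4 bytes each, i.e. an 80 MiB map (matching "Region l2p / blocks: 80.00 MiB"), covering 20971520 user blocks × 4096 B = 80 GiB of FTL address space. With --l2p_dram_limit 60, only 59 of those 80 MiB may be resident at once, as ftl_l2p_cache.c noted above. One plausible derivation of the 60 MiB limit itself (a guess, not shown in the trace) is 60 % of the 103424 MiB base size scaled by 1/1024, which also lands on 60:

    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80    -> full L2P map, MiB
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 -> 80 GiB exposed by ftl0
    echo $(( 103424 * 60 / 100 / 1024 ))        # 60    -> l2p_dram_size_mb (guess)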
00:18:06.002  [2024-12-09 17:07:28.475288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.475342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:18:06.002  [2024-12-09 17:07:28.475360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 59.378 ms
00:18:06.002  [2024-12-09 17:07:28.475370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.475561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.475577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:18:06.002  [2024-12-09 17:07:28.475591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.147 ms
00:18:06.002  [2024-12-09 17:07:28.475600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.498589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.498622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:18:06.002  [2024-12-09 17:07:28.498637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.926 ms
00:18:06.002  [2024-12-09 17:07:28.498646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.520893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.520927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:18:06.002  [2024-12-09 17:07:28.520940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.206 ms
00:18:06.002  [2024-12-09 17:07:28.520948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.521528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.521547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:18:06.002  [2024-12-09 17:07:28.521558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.544 ms
00:18:06.002  [2024-12-09 17:07:28.521566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.591474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.591507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:18:06.002  [2024-12-09 17:07:28.591524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 69.871 ms
00:18:06.002  [2024-12-09 17:07:28.591535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.615756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.615788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:18:06.002  [2024-12-09 17:07:28.615801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.143 ms
00:18:06.002  [2024-12-09 17:07:28.615810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.638368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.638396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:18:06.002  [2024-12-09 17:07:28.638409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.507 ms
00:18:06.002  [2024-12-09 17:07:28.638418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.661390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.661420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:18:06.002  [2024-12-09 17:07:28.661433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.934 ms
00:18:06.002  [2024-12-09 17:07:28.661441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.661484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.661494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:18:06.002  [2024-12-09 17:07:28.661510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:18:06.002  [2024-12-09 17:07:28.661517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.661600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.002  [2024-12-09 17:07:28.661612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:18:06.002  [2024-12-09 17:07:28.661622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.035 ms
00:18:06.002  [2024-12-09 17:07:28.661630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.002  [2024-12-09 17:07:28.662642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4005.258 ms, result 0
00:18:06.002  {
00:18:06.002    "name": "ftl0",
00:18:06.002    "uuid": "599d7cb5-35c6-4a0e-a939-841b79da8dc3"
00:18:06.002  }
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:18:06.002   17:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:18:06.261  [
00:18:06.261    {
00:18:06.261      "name": "ftl0",
00:18:06.261      "aliases": [
00:18:06.261        "599d7cb5-35c6-4a0e-a939-841b79da8dc3"
00:18:06.261      ],
00:18:06.261      "product_name": "FTL disk",
00:18:06.261      "block_size": 4096,
00:18:06.261      "num_blocks": 20971520,
00:18:06.261      "uuid": "599d7cb5-35c6-4a0e-a939-841b79da8dc3",
00:18:06.261      "assigned_rate_limits": {
00:18:06.261        "rw_ios_per_sec": 0,
00:18:06.261        "rw_mbytes_per_sec": 0,
00:18:06.261        "r_mbytes_per_sec": 0,
00:18:06.261        "w_mbytes_per_sec": 0
00:18:06.261      },
00:18:06.261      "claimed": false,
00:18:06.261      "zoned": false,
00:18:06.261      "supported_io_types": {
00:18:06.261        "read": true,
00:18:06.261        "write": true,
00:18:06.261        "unmap": true,
00:18:06.261        "flush": true,
00:18:06.261        "reset": false,
00:18:06.261        "nvme_admin": false,
00:18:06.261        "nvme_io": false,
00:18:06.261        "nvme_io_md": false,
00:18:06.261        "write_zeroes": true,
00:18:06.261        "zcopy": false,
00:18:06.261        "get_zone_info": false,
00:18:06.261        "zone_management": false,
00:18:06.261        "zone_append": false,
00:18:06.261        "compare": false,
00:18:06.261        "compare_and_write": false,
00:18:06.261        "abort": false,
00:18:06.261        "seek_hole": false,
00:18:06.261        "seek_data": false,
00:18:06.261        "copy": false,
00:18:06.261        "nvme_iov_md": false
00:18:06.261      },
00:18:06.261      "driver_specific": {
00:18:06.261        "ftl": {
00:18:06.261          "base_bdev": "df1cf27a-f161-4e9d-8da8-df73dd2f484d",
00:18:06.261          "cache": "nvc0n1p0"
00:18:06.261        }
00:18:06.261      }
00:18:06.261    }
00:18:06.261  ]
00:18:06.261   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0
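For reference, the waitforbdev step traced above reduces to two rpc.py calls: block until bdev examination settles, then query the named bdev with a 2000 ms timeout so the call itself does the polling. A minimal standalone sketch, assuming the same repo layout and the default RPC socket:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000   # prints the JSON descriptor above once ftl0 exists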
00:18:06.261   17:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:18:06.261   17:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:18:06.261   17:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
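The three fio.sh lines above assemble a standalone bdev configuration by wrapping the live subsystem config in a top-level "subsystems" array; this is the JSON the spdk_bdev fio plugin loads later. A sketch of the same composition (the output path is inferred from the rm -f cleanup at the end of this test):

  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json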
00:18:06.261   17:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:18:06.519  [2024-12-09 17:07:29.479157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.519  [2024-12-09 17:07:29.479196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:18:06.519  [2024-12-09 17:07:29.479208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:18:06.519  [2024-12-09 17:07:29.479217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.519  [2024-12-09 17:07:29.479243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:06.519  [2024-12-09 17:07:29.481442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.519  [2024-12-09 17:07:29.481584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:18:06.519  [2024-12-09 17:07:29.481603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.184 ms
00:18:06.519  [2024-12-09 17:07:29.481610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.519  [2024-12-09 17:07:29.481891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.519  [2024-12-09 17:07:29.481903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:18:06.519  [2024-12-09 17:07:29.481912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.251 ms
00:18:06.519  [2024-12-09 17:07:29.481918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.519  [2024-12-09 17:07:29.484335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.519  [2024-12-09 17:07:29.484348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:18:06.519  [2024-12-09 17:07:29.484358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.402 ms
00:18:06.519  [2024-12-09 17:07:29.484364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.519  [2024-12-09 17:07:29.489192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.519  [2024-12-09 17:07:29.489273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:18:06.519  [2024-12-09 17:07:29.489324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.808 ms
00:18:06.519  [2024-12-09 17:07:29.489343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.519  [2024-12-09 17:07:29.507813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.520  [2024-12-09 17:07:29.507920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:18:06.520  [2024-12-09 17:07:29.508008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.395 ms
00:18:06.520  [2024-12-09 17:07:29.508026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.520  [2024-12-09 17:07:29.520530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.520  [2024-12-09 17:07:29.520630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:18:06.520  [2024-12-09 17:07:29.520690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.436 ms
00:18:06.520  [2024-12-09 17:07:29.520709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.520  [2024-12-09 17:07:29.520886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.520  [2024-12-09 17:07:29.520949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:18:06.520  [2024-12-09 17:07:29.520996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.097 ms
00:18:06.520  [2024-12-09 17:07:29.521014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.520  [2024-12-09 17:07:29.538713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.520  [2024-12-09 17:07:29.538797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:18:06.520  [2024-12-09 17:07:29.538840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.667 ms
00:18:06.520  [2024-12-09 17:07:29.538870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.520  [2024-12-09 17:07:29.555948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.520  [2024-12-09 17:07:29.556033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:18:06.520  [2024-12-09 17:07:29.556077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.039 ms
00:18:06.520  [2024-12-09 17:07:29.556110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.778  [2024-12-09 17:07:29.572804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.778  [2024-12-09 17:07:29.572898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:18:06.778  [2024-12-09 17:07:29.572943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.650 ms
00:18:06.778  [2024-12-09 17:07:29.572960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.778  [2024-12-09 17:07:29.590191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.778  [2024-12-09 17:07:29.590277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:18:06.778  [2024-12-09 17:07:29.590321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.147 ms
00:18:06.778  [2024-12-09 17:07:29.590353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.778  [2024-12-09 17:07:29.590393] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:06.778  [2024-12-09 17:07:29.590430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.778  [2024-12-09 17:07:29.590460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.778  [2024-12-09 17:07:29.590483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.778  [2024-12-09 17:07:29.590507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.590980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.591995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.592964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.779  [2024-12-09 17:07:29.593601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:18:06.780  [2024-12-09 17:07:29.593748] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:18:06.780  [2024-12-09 17:07:29.593756] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         599d7cb5-35c6-4a0e-a939-841b79da8dc3
00:18:06.780  [2024-12-09 17:07:29.593763] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:18:06.780  [2024-12-09 17:07:29.593772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:18:06.780  [2024-12-09 17:07:29.593778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:18:06.780  [2024-12-09 17:07:29.593789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:18:06.780  [2024-12-09 17:07:29.593795] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:06.780  [2024-12-09 17:07:29.593802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:18:06.780  [2024-12-09 17:07:29.593808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:18:06.780  [2024-12-09 17:07:29.593814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:18:06.780  [2024-12-09 17:07:29.593819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:18:06.780  [2024-12-09 17:07:29.593827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.780  [2024-12-09 17:07:29.593833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:18:06.780  [2024-12-09 17:07:29.593841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.436 ms
00:18:06.780  [2024-12-09 17:07:29.593861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.603926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.780  [2024-12-09 17:07:29.604021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:18:06.780  [2024-12-09 17:07:29.604035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.027 ms
00:18:06.780  [2024-12-09 17:07:29.604041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.604348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.780  [2024-12-09 17:07:29.604357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:18:06.780  [2024-12-09 17:07:29.604366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.271 ms
00:18:06.780  [2024-12-09 17:07:29.604372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.640296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.640328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:18:06.780  [2024-12-09 17:07:29.640338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.640345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.640400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.640406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:18:06.780  [2024-12-09 17:07:29.640415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.640421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.640492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.640504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:18:06.780  [2024-12-09 17:07:29.640512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.640518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.640548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.640555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:18:06.780  [2024-12-09 17:07:29.640563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.640569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.707226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.707269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:18:06.780  [2024-12-09 17:07:29.707280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.707287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.758491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.758673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:18:06.780  [2024-12-09 17:07:29.758689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.758696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.758796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.758806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:18:06.780  [2024-12-09 17:07:29.758818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.758824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.758890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.758899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:18:06.780  [2024-12-09 17:07:29.758907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.758914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.759009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.759020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:18:06.780  [2024-12-09 17:07:29.759028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.759036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.759081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.759091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:18:06.780  [2024-12-09 17:07:29.759099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.759105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.759144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.759152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:18:06.780  [2024-12-09 17:07:29.759162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.759170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.759216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:06.780  [2024-12-09 17:07:29.759225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:18:06.780  [2024-12-09 17:07:29.759233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:18:06.780  [2024-12-09 17:07:29.759239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:18:06.780  [2024-12-09 17:07:29.759375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.189 ms, result 0
00:18:06.780  true
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76506
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76506 ']'
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76506
00:18:06.780    17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:06.780    17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76506
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76506'
00:18:06.780  killing process with pid 76506
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76506
00:18:06.780   17:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76506
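The killprocess trace above is a guard-then-kill pattern; a rough reconstruction of the helper (a simplified sketch, not the verbatim autotest_common.sh code; the sudo branch, which this run skips, is elided):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0                     # nothing left to kill
      name=$(ps --no-headers -o comm= "$pid")        # "reactor_0" in this run
      [ "$name" = sudo ] && return                   # sudo wrappers need special handling (elided)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }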
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:10.964    17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:10.964    17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:10.964    17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:10.964   17:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:18:11.222  test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:18:11.222  fio-3.35
00:18:11.222  Starting 1 thread
00:18:15.410  
00:18:15.410  test: (groupid=0, jobs=1): err= 0: pid=76696: Mon Dec  9 17:07:38 2024
00:18:15.410    read: IOPS=1184, BW=78.6MiB/s (82.5MB/s)(255MiB/3237msec)
00:18:15.410      slat (nsec): min=4144, max=24266, avg=5861.83, stdev=2161.84
00:18:15.410      clat (usec): min=249, max=1161, avg=377.22, stdev=107.69
00:18:15.410       lat (usec): min=254, max=1166, avg=383.09, stdev=108.26
00:18:15.410      clat percentiles (usec):
00:18:15.410       |  1.00th=[  289],  5.00th=[  297], 10.00th=[  297], 20.00th=[  302],
00:18:15.410       | 30.00th=[  314], 40.00th=[  326], 50.00th=[  330], 60.00th=[  338],
00:18:15.410       | 70.00th=[  396], 80.00th=[  445], 90.00th=[  529], 95.00th=[  603],
00:18:15.410       | 99.00th=[  791], 99.50th=[  865], 99.90th=[  988], 99.95th=[ 1074],
00:18:15.410       | 99.99th=[ 1156]
00:18:15.410    write: IOPS=1192, BW=79.2MiB/s (83.0MB/s)(256MiB/3234msec); 0 zone resets
00:18:15.410      slat (usec): min=14, max=134, avg=21.55, stdev= 4.50
00:18:15.410      clat (usec): min=281, max=2093, avg=423.22, stdev=138.34
00:18:15.410       lat (usec): min=305, max=2115, avg=444.77, stdev=138.81
00:18:15.410      clat percentiles (usec):
00:18:15.410       |  1.00th=[  310],  5.00th=[  314], 10.00th=[  318], 20.00th=[  326],
00:18:15.410       | 30.00th=[  347], 40.00th=[  355], 50.00th=[  367], 60.00th=[  388],
00:18:15.410       | 70.00th=[  453], 80.00th=[  494], 90.00th=[  611], 95.00th=[  693],
00:18:15.410       | 99.00th=[  947], 99.50th=[ 1123], 99.90th=[ 1385], 99.95th=[ 1663],
00:18:15.410       | 99.99th=[ 2089]
00:18:15.410     bw (  KiB/s): min=66504, max=96968, per=99.11%, avg=80353.33, stdev=11163.74, samples=6
00:18:15.410     iops        : min=  978, max= 1426, avg=1181.67, stdev=164.17, samples=6
00:18:15.410    lat (usec)   : 250=0.01%, 500=84.77%, 750=12.91%, 1000=1.90%
00:18:15.410    lat (msec)   : 2=0.39%, 4=0.01%
00:18:15.410    cpu          : usr=99.23%, sys=0.12%, ctx=6, majf=0, minf=1169
00:18:15.410    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:15.410       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:15.410       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:15.410       issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:15.410       latency   : target=0, window=0, percentile=100.00%, depth=1
00:18:15.410  
00:18:15.410  Run status group 0 (all jobs):
00:18:15.410     READ: bw=78.6MiB/s (82.5MB/s), 78.6MiB/s-78.6MiB/s (82.5MB/s-82.5MB/s), io=255MiB (267MB), run=3237-3237msec
00:18:15.410    WRITE: bw=79.2MiB/s (83.0MB/s), 79.2MiB/s-79.2MiB/s (83.0MB/s-83.0MB/s), io=256MiB (269MB), run=3234-3234msec
00:18:16.784  -----------------------------------------------------
00:18:16.784  Suppressions used:
00:18:16.784    count      bytes template
00:18:16.784        1          5 /usr/src/fio/parse.c
00:18:16.784        1          8 libtcmalloc_minimal.so
00:18:16.784        1        904 libcrypto.so
00:18:16.784  -----------------------------------------------------
00:18:16.784  
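The job banner above (randwrite, 68 KiB blocks, spdk_bdev engine, iodepth 1) plus the LD_PRELOAD line is enough to sketch an equivalent standalone job file. The shipped randw-verify.fio is not printed in the log, so the verify mode and size below are assumptions:

  [global]
  ioengine=spdk_bdev
  spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  thread=1                     # required by the SPDK bdev plugin
  rw=randwrite
  bs=68k
  iodepth=1
  verify=md5                   # assumption: some verify mode, per the test name

  [test]
  filename=ftl0
  size=256M                    # assumption, from the ~255-256 MiB run totals

Run it the same way the harness does:
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' /usr/src/fio/fio randw-verify.fio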
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:17.042    17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:17.042    17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:18:17.042    17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:17.042   17:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:18:17.042  first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:18:17.042  second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:18:17.042  fio-3.35
00:18:17.042  Starting 2 threads
00:18:43.665  
00:18:43.665  first_half: (groupid=0, jobs=1): err= 0: pid=76788: Mon Dec  9 17:08:05 2024
00:18:43.665    read: IOPS=2728, BW=10.7MiB/s (11.2MB/s)(255MiB/23960msec)
00:18:43.665      slat (nsec): min=3047, max=37410, avg=3965.20, stdev=886.90
00:18:43.665      clat (usec): min=633, max=313256, avg=34372.66, stdev=19116.54
00:18:43.665       lat (usec): min=637, max=313260, avg=34376.63, stdev=19116.62
00:18:43.665      clat percentiles (msec):
00:18:43.665       |  1.00th=[    9],  5.00th=[   28], 10.00th=[   29], 20.00th=[   30],
00:18:43.665       | 30.00th=[   31], 40.00th=[   31], 50.00th=[   31], 60.00th=[   32],
00:18:43.665       | 70.00th=[   33], 80.00th=[   35], 90.00th=[   39], 95.00th=[   44],
00:18:43.665       | 99.00th=[  138], 99.50th=[  165], 99.90th=[  279], 99.95th=[  296],
00:18:43.665       | 99.99th=[  305]
00:18:43.665    write: IOPS=2899, BW=11.3MiB/s (11.9MB/s)(256MiB/22602msec); 0 zone resets
00:18:43.665      slat (usec): min=3, max=592, avg= 6.05, stdev= 5.10
00:18:43.665      clat (usec): min=368, max=134026, avg=12460.75, stdev=23317.43
00:18:43.665       lat (usec): min=374, max=134034, avg=12466.80, stdev=23317.79
00:18:43.665      clat percentiles (usec):
00:18:43.665       |  1.00th=[   709],  5.00th=[   914], 10.00th=[  1172], 20.00th=[  2114],
00:18:43.665       | 30.00th=[  3130], 40.00th=[  4293], 50.00th=[  5014], 60.00th=[  5538],
00:18:43.665       | 70.00th=[  6259], 80.00th=[ 10945], 90.00th=[ 31851], 95.00th=[ 74974],
00:18:43.665       | 99.00th=[114820], 99.50th=[123208], 99.90th=[129500], 99.95th=[131597],
00:18:43.665       | 99.99th=[133694]
00:18:43.665     bw (  KiB/s): min=  407, max=45400, per=80.72%, avg=18724.54, stdev=14968.06, samples=28
00:18:43.665     iops        : min=  101, max=11350, avg=4681.11, stdev=3742.05, samples=28
00:18:43.665    lat (usec)   : 500=0.02%, 750=0.88%, 1000=2.51%
00:18:43.665    lat (msec)   : 2=6.29%, 4=9.13%, 10=20.76%, 20=5.73%, 50=48.52%
00:18:43.665    lat (msec)   : 100=4.43%, 250=1.65%, 500=0.08%
00:18:43.665    cpu          : usr=99.32%, sys=0.15%, ctx=58, majf=0, minf=5572
00:18:43.665    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:18:43.665       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.665       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:43.665       issued rwts: total=65382,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.665       latency   : target=0, window=0, percentile=100.00%, depth=128
00:18:43.665  second_half: (groupid=0, jobs=1): err= 0: pid=76789: Mon Dec  9 17:08:05 2024
00:18:43.665    read: IOPS=2712, BW=10.6MiB/s (11.1MB/s)(255MiB/24031msec)
00:18:43.665      slat (nsec): min=3015, max=52366, avg=4560.09, stdev=1352.84
00:18:43.665      clat (usec): min=537, max=320212, avg=34400.11, stdev=18373.57
00:18:43.665       lat (usec): min=543, max=320216, avg=34404.67, stdev=18373.64
00:18:43.665      clat percentiles (msec):
00:18:43.665       |  1.00th=[    8],  5.00th=[   28], 10.00th=[   29], 20.00th=[   30],
00:18:43.665       | 30.00th=[   30], 40.00th=[   31], 50.00th=[   31], 60.00th=[   32],
00:18:43.665       | 70.00th=[   33], 80.00th=[   35], 90.00th=[   39], 95.00th=[   46],
00:18:43.665       | 99.00th=[  140], 99.50th=[  155], 99.90th=[  205], 99.95th=[  241],
00:18:43.665       | 99.99th=[  313]
00:18:43.665    write: IOPS=3471, BW=13.6MiB/s (14.2MB/s)(256MiB/18879msec); 0 zone resets
00:18:43.665      slat (usec): min=3, max=1856, avg= 7.01, stdev=10.92
00:18:43.665      clat (usec): min=400, max=133309, avg=12695.24, stdev=23266.45
00:18:43.665       lat (usec): min=405, max=133315, avg=12702.25, stdev=23266.67
00:18:43.665      clat percentiles (usec):
00:18:43.666       |  1.00th=[   660],  5.00th=[   816], 10.00th=[   996], 20.00th=[  1319],
00:18:43.666       | 30.00th=[  1844], 40.00th=[  2606], 50.00th=[  3785], 60.00th=[  5276],
00:18:43.666       | 70.00th=[  9372], 80.00th=[ 12780], 90.00th=[ 34866], 95.00th=[ 72877],
00:18:43.666       | 99.00th=[112722], 99.50th=[122160], 99.90th=[128451], 99.95th=[129500],
00:18:43.666       | 99.99th=[132645]
00:18:43.666     bw (  KiB/s): min= 1200, max=63992, per=94.18%, avg=21847.62, stdev=13488.50, samples=24
00:18:43.666     iops        : min=  300, max=15998, avg=5461.88, stdev=3372.11, samples=24
00:18:43.666    lat (usec)   : 500=0.05%, 750=1.36%, 1000=3.74%
00:18:43.666    lat (msec)   : 2=11.27%, 4=9.63%, 10=10.69%, 20=7.84%, 50=48.95%
00:18:43.666    lat (msec)   : 100=4.57%, 250=1.90%, 500=0.02%
00:18:43.666    cpu          : usr=99.06%, sys=0.21%, ctx=92, majf=0, minf=5547
00:18:43.666    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:18:43.666       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:43.666       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:43.666       issued rwts: total=65183,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:43.666       latency   : target=0, window=0, percentile=100.00%, depth=128
00:18:43.666  
00:18:43.666  Run status group 0 (all jobs):
00:18:43.666     READ: bw=21.2MiB/s (22.3MB/s), 10.6MiB/s-10.7MiB/s (11.1MB/s-11.2MB/s), io=510MiB (535MB), run=23960-24031msec
00:18:43.666    WRITE: bw=22.7MiB/s (23.8MB/s), 11.3MiB/s-13.6MiB/s (11.9MB/s-14.2MB/s), io=512MiB (537MB), run=18879-22602msec
00:18:44.236  -----------------------------------------------------
00:18:44.236  Suppressions used:
00:18:44.236    count      bytes template
00:18:44.236        2         10 /usr/src/fio/parse.c
00:18:44.236        4        384 /usr/src/fio/iolog.c
00:18:44.236        1          8 libtcmalloc_minimal.so
00:18:44.236        1        904 libcrypto.so
00:18:44.236  -----------------------------------------------------
00:18:44.236  
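Every fio_bdev invocation in this suite repeats the sanitizer-preload resolution seen in the xtrace: find the ASan runtime the plugin links against and preload it ahead of the plugin itself. Condensed into a sketch (paths copied from the trace; $job stands in for whichever .fio file the loop is on):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"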
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:44.236   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:44.236    17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:44.236    17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:18:44.236    17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:44.496   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:44.496   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:44.496   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:18:44.496   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:44.496   17:08:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:44.496  test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:18:44.496  fio-3.35
00:18:44.496  Starting 1 thread
00:19:06.439  
00:19:06.439  test: (groupid=0, jobs=1): err= 0: pid=77107: Mon Dec  9 17:08:25 2024
00:19:06.439    read: IOPS=7041, BW=27.5MiB/s (28.8MB/s)(255MiB/9260msec)
00:19:06.439      slat (nsec): min=3180, max=41426, avg=4854.87, stdev=1146.98
00:19:06.439      clat (usec): min=538, max=33951, avg=18169.38, stdev=2794.83
00:19:06.439       lat (usec): min=542, max=33956, avg=18174.24, stdev=2794.90
00:19:06.439      clat percentiles (usec):
00:19:06.439       |  1.00th=[15008],  5.00th=[15270], 10.00th=[15533], 20.00th=[15926],
00:19:06.439       | 30.00th=[16188], 40.00th=[16712], 50.00th=[17433], 60.00th=[18220],
00:19:06.439       | 70.00th=[19006], 80.00th=[20055], 90.00th=[21890], 95.00th=[23987],
00:19:06.439       | 99.00th=[27657], 99.50th=[28705], 99.90th=[30802], 99.95th=[31851],
00:19:06.439       | 99.99th=[33162]
00:19:06.439    write: IOPS=8451, BW=33.0MiB/s (34.6MB/s)(256MiB/7754msec); 0 zone resets
00:19:06.439      slat (usec): min=4, max=589, avg= 8.97, stdev= 5.23
00:19:06.439      clat (usec): min=493, max=119692, avg=15067.16, stdev=16288.85
00:19:06.439       lat (usec): min=498, max=119699, avg=15076.13, stdev=16288.96
00:19:06.439      clat percentiles (usec):
00:19:06.439       |  1.00th=[   766],  5.00th=[  1139], 10.00th=[  1500], 20.00th=[  1876],
00:19:06.439       | 30.00th=[  2343], 40.00th=[  6587], 50.00th=[ 12125], 60.00th=[ 15008],
00:19:06.439       | 70.00th=[ 17433], 80.00th=[ 19792], 90.00th=[ 46924], 95.00th=[ 52691],
00:19:06.439       | 99.00th=[ 62129], 99.50th=[ 66323], 99.90th=[ 89654], 99.95th=[104334],
00:19:06.439       | 99.99th=[116917]
00:19:06.439     bw (  KiB/s): min=15768, max=51704, per=96.92%, avg=32768.00, stdev=7357.05, samples=16
00:19:06.439     iops        : min= 3942, max=12926, avg=8192.00, stdev=1839.26, samples=16
00:19:06.439    lat (usec)   : 500=0.01%, 750=0.43%, 1000=1.35%
00:19:06.439    lat (msec)   : 2=9.82%, 4=7.95%, 10=2.41%, 20=57.84%, 50=16.53%
00:19:06.439    lat (msec)   : 100=3.64%, 250=0.04%
00:19:06.439    cpu          : usr=98.95%, sys=0.25%, ctx=31, majf=0, minf=5565
00:19:06.439    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:19:06.439       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.439       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:06.439       issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.439       latency   : target=0, window=0, percentile=100.00%, depth=128
00:19:06.439  
00:19:06.439  Run status group 0 (all jobs):
00:19:06.439     READ: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=255MiB (267MB), run=9260-9260msec
00:19:06.439    WRITE: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=256MiB (268MB), run=7754-7754msec
00:19:06.439  -----------------------------------------------------
00:19:06.439  Suppressions used:
00:19:06.439    count      bytes template
00:19:06.439        1          5 /usr/src/fio/parse.c
00:19:06.439        2        192 /usr/src/fio/iolog.c
00:19:06.439        1          8 libtcmalloc_minimal.so
00:19:06.439        1        904 libcrypto.so
00:19:06.439  -----------------------------------------------------
00:19:06.439  
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:19:06.439  Remove shared memory files
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58383 /dev/shm/spdk_tgt_trace.pid75434
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:19:06.439  ************************************
00:19:06.439  END TEST ftl_fio_basic
00:19:06.439  ************************************
00:19:06.439  
00:19:06.439  real	1m6.543s
00:19:06.439  user	2m24.971s
00:19:06.439  sys	0m2.955s
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:06.439   17:08:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:19:06.439   17:08:27 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:19:06.439   17:08:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:19:06.439   17:08:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:06.439   17:08:27 ftl -- common/autotest_common.sh@10 -- # set +x
00:19:06.439  ************************************
00:19:06.439  START TEST ftl_bdevperf
00:19:06.439  ************************************
00:19:06.439   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:19:06.439  * Looking for test storage...
00:19:06.439  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:19:06.439    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:19:06.439     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:06.440     17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
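The xtrace above walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' (the IFS=.-: reads) into numeric components and compared slot by slot, and 'lt 1.15 2' succeeds because 1 < 2 in the first slot. A minimal Python sketch of the same comparison, as an illustration rather than the SPDK code itself:

    import re

    def cmp_versions(v1, op, v2):
        # Split on the same separators bash uses via IFS=.-: and keep numeric parts.
        a = [int(x) for x in re.split(r"[.:\-]", v1) if x.isdigit()]
        b = [int(x) for x in re.split(r"[.:\-]", v2) if x.isdigit()]
        # Pad the shorter list with zeros, then compare component by component.
        n = max(len(a), len(b))
        a += [0] * (n - len(a))
        b += [0] * (n - len(b))
        for x, y in zip(a, b):
            if x != y:
                return x < y if op == "<" else x > y
        return False  # all components equal: neither strictly less nor greater

    print(cmp_versions("1.15", "<", "2"))  # True, matching "return 0" above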
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:06.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.440  		--rc genhtml_branch_coverage=1
00:19:06.440  		--rc genhtml_function_coverage=1
00:19:06.440  		--rc genhtml_legend=1
00:19:06.440  		--rc geninfo_all_blocks=1
00:19:06.440  		--rc geninfo_unexecuted_blocks=1
00:19:06.440  		
00:19:06.440  		'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:06.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.440  		--rc genhtml_branch_coverage=1
00:19:06.440  		--rc genhtml_function_coverage=1
00:19:06.440  		--rc genhtml_legend=1
00:19:06.440  		--rc geninfo_all_blocks=1
00:19:06.440  		--rc geninfo_unexecuted_blocks=1
00:19:06.440  		
00:19:06.440  		'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:06.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.440  		--rc genhtml_branch_coverage=1
00:19:06.440  		--rc genhtml_function_coverage=1
00:19:06.440  		--rc genhtml_legend=1
00:19:06.440  		--rc geninfo_all_blocks=1
00:19:06.440  		--rc geninfo_unexecuted_blocks=1
00:19:06.440  		
00:19:06.440  		'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:06.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.440  		--rc genhtml_branch_coverage=1
00:19:06.440  		--rc genhtml_function_coverage=1
00:19:06.440  		--rc genhtml_legend=1
00:19:06.440  		--rc geninfo_all_blocks=1
00:19:06.440  		--rc geninfo_unexecuted_blocks=1
00:19:06.440  		
00:19:06.440  		'
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:19:06.440      17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:19:06.440     17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:19:06.440     17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:06.440    17:08:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77383
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77383
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77383 ']'
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:06.440  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:06.440   17:08:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:06.440  [2024-12-09 17:08:27.809340] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:19:06.440  [2024-12-09 17:08:27.809574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77383 ]
00:19:06.440  [2024-12-09 17:08:27.973193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.440  [2024-12-09 17:08:28.081313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:06.440   17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:06.440   17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev
00:19:06.440     17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:19:06.440    17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size
00:19:06.440     17:08:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:19:06.440     17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:19:06.440     17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:06.440     17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:19:06.440     17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:19:06.440      17:08:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:19:06.440     17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:06.440    {
00:19:06.440      "name": "nvme0n1",
00:19:06.440      "aliases": [
00:19:06.440        "73329317-7538-46cb-a96c-f4a276af43cf"
00:19:06.440      ],
00:19:06.440      "product_name": "NVMe disk",
00:19:06.440      "block_size": 4096,
00:19:06.440      "num_blocks": 1310720,
00:19:06.440      "uuid": "73329317-7538-46cb-a96c-f4a276af43cf",
00:19:06.440      "numa_id": -1,
00:19:06.440      "assigned_rate_limits": {
00:19:06.440        "rw_ios_per_sec": 0,
00:19:06.440        "rw_mbytes_per_sec": 0,
00:19:06.440        "r_mbytes_per_sec": 0,
00:19:06.440        "w_mbytes_per_sec": 0
00:19:06.440      },
00:19:06.440      "claimed": true,
00:19:06.440      "claim_type": "read_many_write_one",
00:19:06.440      "zoned": false,
00:19:06.440      "supported_io_types": {
00:19:06.440        "read": true,
00:19:06.440        "write": true,
00:19:06.440        "unmap": true,
00:19:06.440        "flush": true,
00:19:06.440        "reset": true,
00:19:06.440        "nvme_admin": true,
00:19:06.440        "nvme_io": true,
00:19:06.440        "nvme_io_md": false,
00:19:06.440        "write_zeroes": true,
00:19:06.440        "zcopy": false,
00:19:06.440        "get_zone_info": false,
00:19:06.440        "zone_management": false,
00:19:06.440        "zone_append": false,
00:19:06.440        "compare": true,
00:19:06.440        "compare_and_write": false,
00:19:06.440        "abort": true,
00:19:06.440        "seek_hole": false,
00:19:06.440        "seek_data": false,
00:19:06.440        "copy": true,
00:19:06.440        "nvme_iov_md": false
00:19:06.440      },
00:19:06.440      "driver_specific": {
00:19:06.440        "nvme": [
00:19:06.440          {
00:19:06.440            "pci_address": "0000:00:11.0",
00:19:06.440            "trid": {
00:19:06.440              "trtype": "PCIe",
00:19:06.440              "traddr": "0000:00:11.0"
00:19:06.440            },
00:19:06.440            "ctrlr_data": {
00:19:06.440              "cntlid": 0,
00:19:06.440              "vendor_id": "0x1b36",
00:19:06.440              "model_number": "QEMU NVMe Ctrl",
00:19:06.440              "serial_number": "12341",
00:19:06.440              "firmware_revision": "8.0.0",
00:19:06.440              "subnqn": "nqn.2019-08.org.qemu:12341",
00:19:06.440              "oacs": {
00:19:06.440                "security": 0,
00:19:06.440                "format": 1,
00:19:06.440                "firmware": 0,
00:19:06.440                "ns_manage": 1
00:19:06.440              },
00:19:06.440              "multi_ctrlr": false,
00:19:06.440              "ana_reporting": false
00:19:06.440            },
00:19:06.440            "vs": {
00:19:06.440              "nvme_version": "1.4"
00:19:06.440            },
00:19:06.441            "ns_data": {
00:19:06.441              "id": 1,
00:19:06.441              "can_share": false
00:19:06.441            }
00:19:06.441          }
00:19:06.441        ],
00:19:06.441        "mp_policy": "active_passive"
00:19:06.441      }
00:19:06.441    }
00:19:06.441  ]'
00:19:06.441      17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:06.441     17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:19:06.441      17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:06.441     17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720
00:19:06.441     17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:19:06.441     17:08:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120
00:19:06.441    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120
00:19:06.441    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
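get_bdev_size, traced above, derives a bdev's size in MiB from the bdev_get_bdevs JSON as block_size × num_blocks. For the QEMU namespace nvme0n1 that yields base_size=5120, so the '[[ 103424 -le 5120 ]]' guard is false: the requested 103424 MiB base exceeds the 5 GiB physical device, and the harness instead carves a thin-provisioned logical volume of the full logical size (the -t flag on bdev_lvol_create at common.sh@69 below). Worked arithmetic, with values from the nvme0n1 JSON above:

    block_size = 4096
    num_blocks = 1310720
    print(block_size * num_blocks // (1024 * 1024))  # 5120 MiB, i.e. base_size=5120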
00:19:06.441    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols
00:19:06.441     17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:19:06.441     17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:19:06.441    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7a5e2e16-26d6-4a02-912a-ebead97d8fab
00:19:06.441    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores
00:19:06.441    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a5e2e16-26d6-4a02-912a-ebead97d8fab
00:19:06.699     17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:19:06.957    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=8795e123-6f50-498d-b383-99b13feb4ebc
00:19:06.957    17:08:29 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8795e123-6f50-498d-b383-99b13feb4ebc
00:19:07.215   17:08:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.215    17:08:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.215    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0
00:19:07.215    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:19:07.215    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.215    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size=
00:19:07.215     17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.215     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.215     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:07.215     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:19:07.215     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:19:07.215      17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.473     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:07.473    {
00:19:07.473      "name": "2da59e37-3c52-442a-a11b-b3a77ebdedc5",
00:19:07.473      "aliases": [
00:19:07.473        "lvs/nvme0n1p0"
00:19:07.473      ],
00:19:07.473      "product_name": "Logical Volume",
00:19:07.473      "block_size": 4096,
00:19:07.473      "num_blocks": 26476544,
00:19:07.473      "uuid": "2da59e37-3c52-442a-a11b-b3a77ebdedc5",
00:19:07.473      "assigned_rate_limits": {
00:19:07.473        "rw_ios_per_sec": 0,
00:19:07.473        "rw_mbytes_per_sec": 0,
00:19:07.473        "r_mbytes_per_sec": 0,
00:19:07.473        "w_mbytes_per_sec": 0
00:19:07.473      },
00:19:07.473      "claimed": false,
00:19:07.473      "zoned": false,
00:19:07.473      "supported_io_types": {
00:19:07.473        "read": true,
00:19:07.473        "write": true,
00:19:07.473        "unmap": true,
00:19:07.473        "flush": false,
00:19:07.473        "reset": true,
00:19:07.473        "nvme_admin": false,
00:19:07.473        "nvme_io": false,
00:19:07.473        "nvme_io_md": false,
00:19:07.473        "write_zeroes": true,
00:19:07.473        "zcopy": false,
00:19:07.473        "get_zone_info": false,
00:19:07.473        "zone_management": false,
00:19:07.473        "zone_append": false,
00:19:07.473        "compare": false,
00:19:07.473        "compare_and_write": false,
00:19:07.473        "abort": false,
00:19:07.473        "seek_hole": true,
00:19:07.473        "seek_data": true,
00:19:07.473        "copy": false,
00:19:07.473        "nvme_iov_md": false
00:19:07.473      },
00:19:07.473      "driver_specific": {
00:19:07.473        "lvol": {
00:19:07.473          "lvol_store_uuid": "8795e123-6f50-498d-b383-99b13feb4ebc",
00:19:07.473          "base_bdev": "nvme0n1",
00:19:07.473          "thin_provision": true,
00:19:07.473          "num_allocated_clusters": 0,
00:19:07.473          "snapshot": false,
00:19:07.473          "clone": false,
00:19:07.473          "esnap_clone": false
00:19:07.473        }
00:19:07.473      }
00:19:07.473    }
00:19:07.473  ]'
00:19:07.473      17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:07.473     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:19:07.473      17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:07.473     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:19:07.473     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:19:07.473     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:19:07.473    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171
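The jump from the 103424 MiB lvol size (echoed just above) to 'local base_size=5171' at common.sh@41 is consistent with sizing the NV-cache slice at 5% of the base bdev using integer arithmetic; the same figure reappears as cache_size=5171 and as the bdev_split_create size below. The 5% reading is an inference from the logged numbers, not a quote of ftl/common.sh:

    base_mib = 103424
    cache_mib = base_mib * 5 // 100   # integer division, as bash $(( ... )) would do
    print(cache_mib)                  # 5171 -> matches base_size and cache_size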
00:19:07.473    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev
00:19:07.473     17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:19:07.731    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:19:07.731    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]]
00:19:07.731     17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.731     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.731     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:07.731     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:19:07.731     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:19:07.731      17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:07.990     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:07.990    {
00:19:07.990      "name": "2da59e37-3c52-442a-a11b-b3a77ebdedc5",
00:19:07.990      "aliases": [
00:19:07.990        "lvs/nvme0n1p0"
00:19:07.990      ],
00:19:07.990      "product_name": "Logical Volume",
00:19:07.990      "block_size": 4096,
00:19:07.990      "num_blocks": 26476544,
00:19:07.990      "uuid": "2da59e37-3c52-442a-a11b-b3a77ebdedc5",
00:19:07.990      "assigned_rate_limits": {
00:19:07.990        "rw_ios_per_sec": 0,
00:19:07.990        "rw_mbytes_per_sec": 0,
00:19:07.990        "r_mbytes_per_sec": 0,
00:19:07.990        "w_mbytes_per_sec": 0
00:19:07.990      },
00:19:07.990      "claimed": false,
00:19:07.990      "zoned": false,
00:19:07.990      "supported_io_types": {
00:19:07.990        "read": true,
00:19:07.990        "write": true,
00:19:07.990        "unmap": true,
00:19:07.990        "flush": false,
00:19:07.990        "reset": true,
00:19:07.990        "nvme_admin": false,
00:19:07.990        "nvme_io": false,
00:19:07.990        "nvme_io_md": false,
00:19:07.990        "write_zeroes": true,
00:19:07.990        "zcopy": false,
00:19:07.990        "get_zone_info": false,
00:19:07.990        "zone_management": false,
00:19:07.990        "zone_append": false,
00:19:07.990        "compare": false,
00:19:07.990        "compare_and_write": false,
00:19:07.990        "abort": false,
00:19:07.990        "seek_hole": true,
00:19:07.990        "seek_data": true,
00:19:07.990        "copy": false,
00:19:07.990        "nvme_iov_md": false
00:19:07.990      },
00:19:07.990      "driver_specific": {
00:19:07.990        "lvol": {
00:19:07.990          "lvol_store_uuid": "8795e123-6f50-498d-b383-99b13feb4ebc",
00:19:07.990          "base_bdev": "nvme0n1",
00:19:07.990          "thin_provision": true,
00:19:07.990          "num_allocated_clusters": 0,
00:19:07.990          "snapshot": false,
00:19:07.990          "clone": false,
00:19:07.990          "esnap_clone": false
00:19:07.990        }
00:19:07.990      }
00:19:07.990    }
00:19:07.990  ]'
00:19:07.990      17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:07.990     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:19:07.990      17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:07.990     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:19:07.990     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:19:07.990     17:08:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:19:07.990    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:19:07.990    17:08:30 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:19:08.248   17:08:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
00:19:08.248    17:08:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:08.248    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:08.248    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:08.248    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:19:08.248    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:19:08.248     17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2da59e37-3c52-442a-a11b-b3a77ebdedc5
00:19:08.507    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:08.507    {
00:19:08.507      "name": "2da59e37-3c52-442a-a11b-b3a77ebdedc5",
00:19:08.507      "aliases": [
00:19:08.507        "lvs/nvme0n1p0"
00:19:08.507      ],
00:19:08.507      "product_name": "Logical Volume",
00:19:08.507      "block_size": 4096,
00:19:08.507      "num_blocks": 26476544,
00:19:08.508      "uuid": "2da59e37-3c52-442a-a11b-b3a77ebdedc5",
00:19:08.508      "assigned_rate_limits": {
00:19:08.508        "rw_ios_per_sec": 0,
00:19:08.508        "rw_mbytes_per_sec": 0,
00:19:08.508        "r_mbytes_per_sec": 0,
00:19:08.508        "w_mbytes_per_sec": 0
00:19:08.508      },
00:19:08.508      "claimed": false,
00:19:08.508      "zoned": false,
00:19:08.508      "supported_io_types": {
00:19:08.508        "read": true,
00:19:08.508        "write": true,
00:19:08.508        "unmap": true,
00:19:08.508        "flush": false,
00:19:08.508        "reset": true,
00:19:08.508        "nvme_admin": false,
00:19:08.508        "nvme_io": false,
00:19:08.508        "nvme_io_md": false,
00:19:08.508        "write_zeroes": true,
00:19:08.508        "zcopy": false,
00:19:08.508        "get_zone_info": false,
00:19:08.508        "zone_management": false,
00:19:08.508        "zone_append": false,
00:19:08.508        "compare": false,
00:19:08.508        "compare_and_write": false,
00:19:08.508        "abort": false,
00:19:08.508        "seek_hole": true,
00:19:08.508        "seek_data": true,
00:19:08.508        "copy": false,
00:19:08.508        "nvme_iov_md": false
00:19:08.508      },
00:19:08.508      "driver_specific": {
00:19:08.508        "lvol": {
00:19:08.508          "lvol_store_uuid": "8795e123-6f50-498d-b383-99b13feb4ebc",
00:19:08.508          "base_bdev": "nvme0n1",
00:19:08.508          "thin_provision": true,
00:19:08.508          "num_allocated_clusters": 0,
00:19:08.508          "snapshot": false,
00:19:08.508          "clone": false,
00:19:08.508          "esnap_clone": false
00:19:08.508        }
00:19:08.508      }
00:19:08.508    }
00:19:08.508  ]'
00:19:08.508     17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:08.508    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:19:08.508     17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:08.508    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:19:08.508    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:19:08.508    17:08:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:19:08.508   17:08:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20
00:19:08.508   17:08:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2da59e37-3c52-442a-a11b-b3a77ebdedc5 -c nvc0n1p0 --l2p_dram_limit 20
00:19:08.508  [2024-12-09 17:08:31.507798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.507856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:19:08.508  [2024-12-09 17:08:31.507869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:19:08.508  [2024-12-09 17:08:31.507878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.507920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.507930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:19:08.508  [2024-12-09 17:08:31.507937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:19:08.508  [2024-12-09 17:08:31.507944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.507958] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:08.508  [2024-12-09 17:08:31.508518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:08.508  [2024-12-09 17:08:31.508546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.508554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:19:08.508  [2024-12-09 17:08:31.508562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.592 ms
00:19:08.508  [2024-12-09 17:08:31.508570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.508622] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5581d5bc-afae-4281-9bea-700d0fcbbbdc
00:19:08.508  [2024-12-09 17:08:31.509897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.510019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:19:08.508  [2024-12-09 17:08:31.510039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:19:08.508  [2024-12-09 17:08:31.510048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.516889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.516974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:19:08.508  [2024-12-09 17:08:31.517019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.800 ms
00:19:08.508  [2024-12-09 17:08:31.517039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.517120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.517267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:19:08.508  [2024-12-09 17:08:31.517293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.056 ms
00:19:08.508  [2024-12-09 17:08:31.517310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.517356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.517376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:19:08.508  [2024-12-09 17:08:31.517430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:19:08.508  [2024-12-09 17:08:31.517448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.517477] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:08.508  [2024-12-09 17:08:31.520733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.520828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:19:08.508  [2024-12-09 17:08:31.520891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.264 ms
00:19:08.508  [2024-12-09 17:08:31.520945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.520989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.521033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:19:08.508  [2024-12-09 17:08:31.521051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:19:08.508  [2024-12-09 17:08:31.521068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.521096] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:19:08.508  [2024-12-09 17:08:31.521271] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:19:08.508  [2024-12-09 17:08:31.521345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:08.508  [2024-12-09 17:08:31.521417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:19:08.508  [2024-12-09 17:08:31.521444] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:19:08.508  [2024-12-09 17:08:31.521470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:19:08.508  [2024-12-09 17:08:31.521553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:19:08.508  [2024-12-09 17:08:31.521572] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:19:08.508  [2024-12-09 17:08:31.521587] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:19:08.508  [2024-12-09 17:08:31.521603] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:19:08.508  [2024-12-09 17:08:31.521621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.521639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:19:08.508  [2024-12-09 17:08:31.521694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.527 ms
00:19:08.508  [2024-12-09 17:08:31.521714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.521809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.508  [2024-12-09 17:08:31.521860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:19:08.508  [2024-12-09 17:08:31.521877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.061 ms
00:19:08.508  [2024-12-09 17:08:31.521944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:08.508  [2024-12-09 17:08:31.522030] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:08.508  [2024-12-09 17:08:31.522123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:08.508  [2024-12-09 17:08:31.522143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:19:08.508  [2024-12-09 17:08:31.522160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:08.508  [2024-12-09 17:08:31.522176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:08.508  [2024-12-09 17:08:31.522192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:19:08.508  [2024-12-09 17:08:31.522263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:19:08.508  [2024-12-09 17:08:31.522282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:08.508  [2024-12-09 17:08:31.522297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:19:08.508  [2024-12-09 17:08:31.522314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:19:08.508  [2024-12-09 17:08:31.522329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:08.508  [2024-12-09 17:08:31.522378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:19:08.508  [2024-12-09 17:08:31.522394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:19:08.508  [2024-12-09 17:08:31.522410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:08.508  [2024-12-09 17:08:31.522424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:19:08.508  [2024-12-09 17:08:31.522441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:08.508  [2024-12-09 17:08:31.522481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:08.508  [2024-12-09 17:08:31.522501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:19:08.508  [2024-12-09 17:08:31.522516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:08.508  [2024-12-09 17:08:31.522630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:08.508  [2024-12-09 17:08:31.522648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:19:08.508  [2024-12-09 17:08:31.522664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:08.508  [2024-12-09 17:08:31.522679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:08.509  [2024-12-09 17:08:31.522696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:19:08.509  [2024-12-09 17:08:31.522710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:08.509  [2024-12-09 17:08:31.522726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:08.509  [2024-12-09 17:08:31.522804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:19:08.509  [2024-12-09 17:08:31.522823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:08.509  [2024-12-09 17:08:31.522836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:08.509  [2024-12-09 17:08:31.522876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:19:08.509  [2024-12-09 17:08:31.522891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:08.509  [2024-12-09 17:08:31.522910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:08.509  [2024-12-09 17:08:31.522925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:19:08.509  [2024-12-09 17:08:31.523001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:19:08.509  [2024-12-09 17:08:31.523018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:08.509  [2024-12-09 17:08:31.523035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:19:08.509  [2024-12-09 17:08:31.523050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:19:08.509  [2024-12-09 17:08:31.523066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:19:08.509  [2024-12-09 17:08:31.523080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:19:08.509  [2024-12-09 17:08:31.523129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:08.509  [2024-12-09 17:08:31.523145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:19:08.509  [2024-12-09 17:08:31.523162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:19:08.509  [2024-12-09 17:08:31.523177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:08.509  [2024-12-09 17:08:31.523192] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:08.509  [2024-12-09 17:08:31.523213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:08.509  [2024-12-09 17:08:31.523230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:19:08.509  [2024-12-09 17:08:31.523275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:08.509  [2024-12-09 17:08:31.523296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:08.509  [2024-12-09 17:08:31.523311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:19:08.509  [2024-12-09 17:08:31.523326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:19:08.509  [2024-12-09 17:08:31.523341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:08.509  [2024-12-09 17:08:31.523356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:19:08.509  [2024-12-09 17:08:31.523370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:19:08.509  [2024-12-09 17:08:31.523432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:08.509  [2024-12-09 17:08:31.523457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.523482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:19:08.509  [2024-12-09 17:08:31.523503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:19:08.509  [2024-12-09 17:08:31.523559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:19:08.509  [2024-12-09 17:08:31.523582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:19:08.509  [2024-12-09 17:08:31.523695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:19:08.509  [2024-12-09 17:08:31.523717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:19:08.509  [2024-12-09 17:08:31.523742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:19:08.509  [2024-12-09 17:08:31.523764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:19:08.509  [2024-12-09 17:08:31.523790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:19:08.509  [2024-12-09 17:08:31.523886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.523912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.523934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.523957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.523980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:19:08.509  [2024-12-09 17:08:31.524003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:08.509  [2024-12-09 17:08:31.524066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.524095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:08.509  [2024-12-09 17:08:31.524117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:08.509  [2024-12-09 17:08:31.524140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:08.509  [2024-12-09 17:08:31.524162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:08.509  [2024-12-09 17:08:31.524219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:08.509  [2024-12-09 17:08:31.524240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:19:08.509  [2024-12-09 17:08:31.524257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.237 ms
00:19:08.509  [2024-12-09 17:08:31.524272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
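The layout figures printed during startup cross-check cleanly: with an L2P address size of 4 bytes, the 20971520 L2P entries occupy exactly the 80.00 MiB reported for 'Region l2p' above, while the --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that table stays resident in DRAM (see the 'l2p maximum resident size is: 19 (of 20) MiB' notice further down). A small arithmetic check:

    entries = 20971520            # "L2P entries" above
    entry_size = 4                # "L2P address size" in bytes
    print(entries * entry_size / (1024 * 1024))  # 80.0 -> "Region l2p ... 80.00 MiB"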
00:19:08.509  [2024-12-09 17:08:31.524314] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:19:08.509  [2024-12-09 17:08:31.524361] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:19:11.793  [2024-12-09 17:08:34.420620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.420904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:19:11.793  [2024-12-09 17:08:34.420992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2896.291 ms
00:19:11.793  [2024-12-09 17:08:34.421019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.449229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.449387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:19:11.793  [2024-12-09 17:08:34.449449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.879 ms
00:19:11.793  [2024-12-09 17:08:34.449474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.449611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.449747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:19:11.793  [2024-12-09 17:08:34.449778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:19:11.793  [2024-12-09 17:08:34.449799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.500683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.500827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:19:11.793  [2024-12-09 17:08:34.500917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 50.820 ms
00:19:11.793  [2024-12-09 17:08:34.501216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.501295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.501404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:19:11.793  [2024-12-09 17:08:34.501507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:19:11.793  [2024-12-09 17:08:34.501522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.501985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.502002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:19:11.793  [2024-12-09 17:08:34.502014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.408 ms
00:19:11.793  [2024-12-09 17:08:34.502023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.502133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.502143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:19:11.793  [2024-12-09 17:08:34.502156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.091 ms
00:19:11.793  [2024-12-09 17:08:34.502164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.516512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.516639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:19:11.793  [2024-12-09 17:08:34.516657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.328 ms
00:19:11.793  [2024-12-09 17:08:34.516672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.529014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB
00:19:11.793  [2024-12-09 17:08:34.535043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.535160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:19:11.793  [2024-12-09 17:08:34.535175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.294 ms
00:19:11.793  [2024-12-09 17:08:34.535185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.610836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.610888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:19:11.793  [2024-12-09 17:08:34.610900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 75.628 ms
00:19:11.793  [2024-12-09 17:08:34.610911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.611076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.611092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:19:11.793  [2024-12-09 17:08:34.611101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.144 ms
00:19:11.793  [2024-12-09 17:08:34.611114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.633723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.633758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:19:11.793  [2024-12-09 17:08:34.633769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.571 ms
00:19:11.793  [2024-12-09 17:08:34.633779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.656128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.656161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:19:11.793  [2024-12-09 17:08:34.656172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.330 ms
00:19:11.793  [2024-12-09 17:08:34.656182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.656767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.656794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:19:11.793  [2024-12-09 17:08:34.656803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.568 ms
00:19:11.793  [2024-12-09 17:08:34.656812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.731011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.731049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:19:11.793  [2024-12-09 17:08:34.731060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 74.170 ms
00:19:11.793  [2024-12-09 17:08:34.731070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.755830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.755873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:19:11.793  [2024-12-09 17:08:34.755887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.682 ms
00:19:11.793  [2024-12-09 17:08:34.755910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.778543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.778577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:19:11.793  [2024-12-09 17:08:34.778587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.601 ms
00:19:11.793  [2024-12-09 17:08:34.778596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.801528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.801566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:19:11.793  [2024-12-09 17:08:34.801577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.901 ms
00:19:11.793  [2024-12-09 17:08:34.801586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.801619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.801634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:19:11.793  [2024-12-09 17:08:34.801643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:19:11.793  [2024-12-09 17:08:34.801652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.801729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.793  [2024-12-09 17:08:34.801741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:19:11.793  [2024-12-09 17:08:34.801750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.035 ms
00:19:11.793  [2024-12-09 17:08:34.801759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:11.793  [2024-12-09 17:08:34.802699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3294.455 ms, result 0
00:19:11.793  {
00:19:11.793    "name": "ftl0",
00:19:11.793    "uuid": "5581d5bc-afae-4281-9bea-700d0fcbbbdc"
00:19:11.793  }
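The small JSON object above is the return payload of the bdev_ftl_create RPC issued just before this point in the test. For orientation, a minimal sketch of such a call is shown below; the base and cache bdev names are hypothetical placeholders, not the ones used in this run:

     /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl0 \
         -d <base_bdev> -c <cache_bdev>   # <...> = placeholder bdev names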
00:19:11.793   17:08:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
00:19:11.793   17:08:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:19:11.793   17:08:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
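Reassembled, the three traced fragments above form one pipeline that sanity-checks that the new device registered under the expected name:

     /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 \
         | jq -r .name | grep -qw ftl0   # exits 0 only if the stats name is exactly ftl0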
00:19:12.052   17:08:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
00:19:12.052  [2024-12-09 17:08:35.074919] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:12.052  I/O size of 69632 is greater than zero copy threshold (65536).
00:19:12.052  Zero copy mechanism will not be used.
00:19:12.052  Running I/O for 4 seconds...
00:19:14.373        847.00 IOPS,    56.25 MiB/s
[2024-12-09T17:08:38.346Z]       910.50 IOPS,    60.46 MiB/s
[2024-12-09T17:08:39.279Z]       968.00 IOPS,    64.28 MiB/s
[2024-12-09T17:08:39.279Z]      1168.50 IOPS,    77.60 MiB/s
00:19:16.238                                                                                                  Latency(us)
00:19:16.238  
[2024-12-09T17:08:39.279Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:16.238  Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:16.238  	 ftl0                :       4.00    1168.21      77.58       0.00     0.00     902.70     188.26    2986.93
00:19:16.238  
[2024-12-09T17:08:39.279Z]  ===================================================================================================================
00:19:16.238  
[2024-12-09T17:08:39.279Z]  Total                       :               1168.21      77.58       0.00     0.00     902.70     188.26    2986.93
00:19:16.238  [2024-12-09 17:08:39.084722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:16.238  {
00:19:16.238    "results": [
00:19:16.238      {
00:19:16.238        "job": "ftl0",
00:19:16.238        "core_mask": "0x1",
00:19:16.238        "workload": "randwrite",
00:19:16.238        "status": "finished",
00:19:16.238        "queue_depth": 1,
00:19:16.238        "io_size": 69632,
00:19:16.238        "runtime": 4.001836,
00:19:16.238        "iops": 1168.2137898704495,
00:19:16.238        "mibps": 77.57669698358454,
00:19:16.238        "io_failed": 0,
00:19:16.238        "io_timeout": 0,
00:19:16.238        "avg_latency_us": 902.704314603044,
00:19:16.238        "min_latency_us": 188.25846153846155,
00:19:16.238        "max_latency_us": 2986.929230769231
00:19:16.238      }
00:19:16.238    ],
00:19:16.238    "core_count": 1
00:19:16.238  }
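The figures in this result block cross-check against each other: iops is total I/Os over runtime (4675 I/Os / 4.0018 s ≈ 1168.21), and mibps is iops times io_size, 1168.21 × 69632 B ≈ 81.35 MB/s ≈ 77.58 MiB/s, matching the table printed above.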
00:19:16.238   17:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:19:16.238  [2024-12-09 17:08:39.204302] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:16.238  Running I/O for 4 seconds...
00:19:18.589       7986.00 IOPS,    31.20 MiB/s
[2024-12-09T17:08:42.563Z]      7040.50 IOPS,    27.50 MiB/s
[2024-12-09T17:08:43.501Z]      7016.33 IOPS,    27.41 MiB/s
[2024-12-09T17:08:43.501Z]      7159.50 IOPS,    27.97 MiB/s
00:19:20.460                                                                                                  Latency(us)
00:19:20.460  
[2024-12-09T17:08:43.501Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:20.460  Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:20.460  	 ftl0                :       4.01    7168.43      28.00       0.00     0.00   17825.57     239.46   98404.82
00:19:20.460  
[2024-12-09T17:08:43.501Z]  ===================================================================================================================
00:19:20.460  
[2024-12-09T17:08:43.501Z]  Total                       :               7168.43      28.00       0.00     0.00   17825.57       0.00   98404.82
00:19:20.460  [2024-12-09 17:08:43.225037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:20.460  {
00:19:20.460    "results": [
00:19:20.460      {
00:19:20.460        "job": "ftl0",
00:19:20.460        "core_mask": "0x1",
00:19:20.460        "workload": "randwrite",
00:19:20.460        "status": "finished",
00:19:20.460        "queue_depth": 128,
00:19:20.460        "io_size": 4096,
00:19:20.460        "runtime": 4.012872,
00:19:20.460        "iops": 7168.431985869472,
00:19:20.460        "mibps": 28.001687444802624,
00:19:20.460        "io_failed": 0,
00:19:20.460        "io_timeout": 0,
00:19:20.460        "avg_latency_us": 17825.570544713577,
00:19:20.460        "min_latency_us": 239.45846153846153,
00:19:20.460        "max_latency_us": 98404.82461538461
00:19:20.460      }
00:19:20.460    ],
00:19:20.460    "core_count": 1
00:19:20.460  }
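A quick Little's-law cross-check on this pass: at queue depth 128, expected mean latency is queue_depth / IOPS = 128 / 7168.43 per second ≈ 17,856 us, in line with the reported avg_latency_us of 17,825.57 (the residual gap is presumably per-second variation across the 4 s window).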
00:19:20.460   17:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:19:20.460  [2024-12-09 17:08:43.336783] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:20.460  Running I/O for 4 seconds...
00:19:22.342       6270.00 IOPS,    24.49 MiB/s
[2024-12-09T17:08:46.770Z]      6321.50 IOPS,    24.69 MiB/s
[2024-12-09T17:08:47.714Z]      6024.00 IOPS,    23.53 MiB/s
[2024-12-09T17:08:47.714Z]      5670.75 IOPS,    22.15 MiB/s
00:19:24.673                                                                                                  Latency(us)
00:19:24.673  
[2024-12-09T17:08:47.714Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:24.673  Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:24.673  	 Verification LBA range: start 0x0 length 0x1400000
00:19:24.673  	 ftl0                :       4.02    5680.94      22.19       0.00     0.00   22462.39     237.88   37708.41
00:19:24.673  
[2024-12-09T17:08:47.714Z]  ===================================================================================================================
00:19:24.673  
[2024-12-09T17:08:47.714Z]  Total                       :               5680.94      22.19       0.00     0.00   22462.39       0.00   37708.41
00:19:24.673  [2024-12-09 17:08:47.368255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:24.673  {
00:19:24.673    "results": [
00:19:24.673      {
00:19:24.673        "job": "ftl0",
00:19:24.673        "core_mask": "0x1",
00:19:24.673        "workload": "verify",
00:19:24.673        "status": "finished",
00:19:24.673        "verify_range": {
00:19:24.673          "start": 0,
00:19:24.673          "length": 20971520
00:19:24.673        },
00:19:24.673        "queue_depth": 128,
00:19:24.673        "io_size": 4096,
00:19:24.673        "runtime": 4.015357,
00:19:24.673        "iops": 5680.939453204285,
00:19:24.673        "mibps": 22.19116973907924,
00:19:24.673        "io_failed": 0,
00:19:24.673        "io_timeout": 0,
00:19:24.673        "avg_latency_us": 22462.389523003407,
00:19:24.673        "min_latency_us": 237.8830769230769,
00:19:24.673        "max_latency_us": 37708.406153846154
00:19:24.673      }
00:19:24.673    ],
00:19:24.673    "core_count": 1
00:19:24.673  }
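The verify_range above decodes cleanly: length 20971520 B = 0x1400000 = 20 MiB, exactly the 'length 0x1400000' span printed in the verification table, and at the 4096 B io_size that is 20971520 / 4096 = 5120 blocks under verification.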
00:19:24.673   17:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:19:24.673  [2024-12-09 17:08:47.584102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673  [2024-12-09 17:08:47.584160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:19:24.673  [2024-12-09 17:08:47.584174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:19:24.673  [2024-12-09 17:08:47.584186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.674  [2024-12-09 17:08:47.584210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:24.674  [2024-12-09 17:08:47.587587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.674  [2024-12-09 17:08:47.587629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:19:24.674  [2024-12-09 17:08:47.587644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.356 ms
00:19:24.674  [2024-12-09 17:08:47.587653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.674  [2024-12-09 17:08:47.590896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.674  [2024-12-09 17:08:47.590936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:19:24.674  [2024-12-09 17:08:47.590960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.215 ms
00:19:24.674  [2024-12-09 17:08:47.590971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.812905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.812948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:19:24.936  [2024-12-09 17:08:47.812969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 221.908 ms
00:19:24.936  [2024-12-09 17:08:47.812979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.819146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.819188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:19:24.936  [2024-12-09 17:08:47.819202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.119 ms
00:19:24.936  [2024-12-09 17:08:47.819216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.845229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.845274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:19:24.936  [2024-12-09 17:08:47.845290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.938 ms
00:19:24.936  [2024-12-09 17:08:47.845300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.863462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.863519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:19:24.936  [2024-12-09 17:08:47.863535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.110 ms
00:19:24.936  [2024-12-09 17:08:47.863545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.863718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.863734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:19:24.936  [2024-12-09 17:08:47.863753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.120 ms
00:19:24.936  [2024-12-09 17:08:47.863763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.890133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.890179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:19:24.936  [2024-12-09 17:08:47.890194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.346 ms
00:19:24.936  [2024-12-09 17:08:47.890203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.915673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.915718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:19:24.936  [2024-12-09 17:08:47.915733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.420 ms
00:19:24.936  [2024-12-09 17:08:47.915741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.940553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.940599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:19:24.936  [2024-12-09 17:08:47.940614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.763 ms
00:19:24.936  [2024-12-09 17:08:47.940622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.965097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.936  [2024-12-09 17:08:47.965142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:19:24.936  [2024-12-09 17:08:47.965160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.384 ms
00:19:24.936  [2024-12-09 17:08:47.965168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:24.936  [2024-12-09 17:08:47.965215] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:24.936  [2024-12-09 17:08:47.965234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.936  [2024-12-09 17:08:47.965668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.965995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:19:24.937  [2024-12-09 17:08:47.966286] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:19:24.937  [2024-12-09 17:08:47.966299] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         5581d5bc-afae-4281-9bea-700d0fcbbbdc
00:19:24.937  [2024-12-09 17:08:47.966311] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:19:24.937  [2024-12-09 17:08:47.966321] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:19:24.937  [2024-12-09 17:08:47.966329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:19:24.937  [2024-12-09 17:08:47.966339] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:19:24.937  [2024-12-09 17:08:47.966347] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:24.937  [2024-12-09 17:08:47.966356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:19:24.937  [2024-12-09 17:08:47.966364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:19:24.937  [2024-12-09 17:08:47.966375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:19:24.937  [2024-12-09 17:08:47.966383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:19:24.937  [2024-12-09 17:08:47.966392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.937  [2024-12-09 17:08:47.966401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:19:24.937  [2024-12-09 17:08:47.966412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.180 ms
00:19:24.937  [2024-12-09 17:08:47.966419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
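The 'WAF: inf' in the dump above is arithmetic rather than an error: write amplification factor = total writes / user writes = 960 / 0, which the debug dump renders as inf; with the user-write counter at zero, the 960 writes recorded here are presumably internal metadata traffic.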
00:19:25.199  [2024-12-09 17:08:47.980970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:25.199  [2024-12-09 17:08:47.981013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:19:25.199  [2024-12-09 17:08:47.981028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.508 ms
00:19:25.199  [2024-12-09 17:08:47.981038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:47.981471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:25.199  [2024-12-09 17:08:47.981492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:19:25.199  [2024-12-09 17:08:47.981505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.396 ms
00:19:25.199  [2024-12-09 17:08:47.981513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.023374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.023420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:19:25.199  [2024-12-09 17:08:48.023438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.023447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.023515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.023525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:19:25.199  [2024-12-09 17:08:48.023536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.023545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.023628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.023642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:19:25.199  [2024-12-09 17:08:48.023653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.023662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.023682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.023691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:19:25.199  [2024-12-09 17:08:48.023702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.023711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.116174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.116232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:19:25.199  [2024-12-09 17:08:48.116276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.116286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.191802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.191872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:19:25.199  [2024-12-09 17:08:48.191889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.191899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.192065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:19:25.199  [2024-12-09 17:08:48.192077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.192087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.192153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:19:25.199  [2024-12-09 17:08:48.192165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.192174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.192307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:19:25.199  [2024-12-09 17:08:48.192321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.192332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.192394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:19:25.199  [2024-12-09 17:08:48.192405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.192414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.192491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:19:25.199  [2024-12-09 17:08:48.192505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.192536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:25.199  [2024-12-09 17:08:48.192609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:19:25.199  [2024-12-09 17:08:48.192620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:25.199  [2024-12-09 17:08:48.192629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:25.199  [2024-12-09 17:08:48.192804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 608.642 ms, result 0
00:19:25.199  true
00:19:25.199   17:08:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77383
00:19:25.199   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77383 ']'
00:19:25.199   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77383
00:19:25.199    17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:19:25.199   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:25.199    17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77383
00:19:25.461  killing process with pid 77383
00:19:25.461  Received shutdown signal, test time was about 4.000000 seconds
00:19:25.461  
00:19:25.461                                                                                                  Latency(us)
00:19:25.461  
[2024-12-09T17:08:48.502Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:25.461  
[2024-12-09T17:08:48.502Z]  ===================================================================================================================
00:19:25.461  
[2024-12-09T17:08:48.502Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:25.461   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:25.461   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:25.461   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77383'
00:19:25.461   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77383
00:19:25.461   17:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77383
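Stripped of its xtrace noise, the killprocess helper being traced here reduces to the sketch below (a condensed re-creation, not the verbatim autotest_common.sh source):

     killproc() {
         local pid=$1
         kill -0 "$pid" 2>/dev/null || return     # probe: pid must still exist
         local name
         name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
         [ "$name" = sudo ] && return             # refuse-to-kill guard, simplified
         echo "killing process with pid $pid"
         kill "$pid" && wait "$pid"               # wait only reaps this shell's children
     }
     killproc 77383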
00:19:26.405  Remove shared memory files
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:19:26.405  
00:19:26.405  real	0m21.580s
00:19:26.405  user	0m24.170s
00:19:26.405  sys	0m0.900s
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:26.405  ************************************
00:19:26.405   17:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:26.405  END TEST ftl_bdevperf
00:19:26.405  ************************************
00:19:26.405   17:08:49 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:19:26.405   17:08:49 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:19:26.405   17:08:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:26.405   17:08:49 ftl -- common/autotest_common.sh@10 -- # set +x
00:19:26.405  ************************************
00:19:26.405  START TEST ftl_trim
00:19:26.405  ************************************
00:19:26.405   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:19:26.405  * Looking for test storage...
00:19:26.405  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:26.405     17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:26.405     17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:26.405     17:08:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:26.405    17:08:49 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:26.405  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:26.405  		--rc genhtml_branch_coverage=1
00:19:26.405  		--rc genhtml_function_coverage=1
00:19:26.405  		--rc genhtml_legend=1
00:19:26.405  		--rc geninfo_all_blocks=1
00:19:26.405  		--rc geninfo_unexecuted_blocks=1
00:19:26.405  		
00:19:26.405  		'
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:26.405  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:26.405  		--rc genhtml_branch_coverage=1
00:19:26.405  		--rc genhtml_function_coverage=1
00:19:26.405  		--rc genhtml_legend=1
00:19:26.405  		--rc geninfo_all_blocks=1
00:19:26.405  		--rc geninfo_unexecuted_blocks=1
00:19:26.405  		
00:19:26.405  		'
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:26.405  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:26.405  		--rc genhtml_branch_coverage=1
00:19:26.405  		--rc genhtml_function_coverage=1
00:19:26.405  		--rc genhtml_legend=1
00:19:26.405  		--rc geninfo_all_blocks=1
00:19:26.405  		--rc geninfo_unexecuted_blocks=1
00:19:26.405  		
00:19:26.405  		'
00:19:26.405    17:08:49 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:26.405  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:26.405  		--rc genhtml_branch_coverage=1
00:19:26.405  		--rc genhtml_function_coverage=1
00:19:26.405  		--rc genhtml_legend=1
00:19:26.405  		--rc geninfo_all_blocks=1
00:19:26.405  		--rc geninfo_unexecuted_blocks=1
00:19:26.405  		
00:19:26.405  		'
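The lt/cmp_versions dance traced above compacts to the sketch below; this mirrors the element-wise compare shown in the trace but is not the verbatim scripts/common.sh source:

     lt() {   # returns 0 when version $1 sorts strictly before $2
         local IFS=.-:                 # split on the same separators as the trace
         local -a ver1 ver2
         read -ra ver1 <<< "$1"
         read -ra ver2 <<< "$2"
         local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
         for (( v = 0; v < len; v++ )); do
             (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
             (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
         done
         return 1                      # equal versions are not less-than
     }
     lt 1.15 2 && echo "1.15 < 2"      # the comparison made in the trace above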
00:19:26.405   17:08:49 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:19:26.405      17:08:49 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:19:26.405     17:08:49 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:19:26.405     17:08:49 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:19:26.405    17:08:49 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:26.406    17:08:49 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=77725
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 77725
00:19:26.406   17:08:49 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:19:26.406   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77725 ']'
00:19:26.406   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:26.406   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:26.406  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:26.406   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:26.406   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:26.406   17:08:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:26.668  [2024-12-09 17:08:49.512561] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:19:26.668  [2024-12-09 17:08:49.512955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77725 ]
00:19:26.668  [2024-12-09 17:08:49.677549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:26.928  [2024-12-09 17:08:49.828508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:26.928  [2024-12-09 17:08:49.828911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:26.928  [2024-12-09 17:08:49.828920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
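The three reactor lines follow directly from the -m 0x7 core mask passed to spdk_tgt above: 0x7 is binary 111, which selects cores 0 through 2, hence the earlier 'Total cores available: 3' notice and one reactor per core.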
00:19:27.868   17:08:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:27.868   17:08:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:19:27.868    17:08:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:19:27.868    17:08:50 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
00:19:27.868    17:08:50 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:19:27.868    17:08:50 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
00:19:27.868    17:08:50 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
00:19:27.868     17:08:50 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:19:28.128    17:08:50 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:19:28.128    17:08:50 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
00:19:28.128     17:08:50 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:19:28.128     17:08:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:19:28.128     17:08:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:28.128     17:08:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:19:28.128     17:08:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:19:28.128      17:08:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:19:28.128     17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:28.128    {
00:19:28.128      "name": "nvme0n1",
00:19:28.128      "aliases": [
00:19:28.128        "09e6e95e-b500-4b46-94ec-1e73c145927b"
00:19:28.128      ],
00:19:28.128      "product_name": "NVMe disk",
00:19:28.128      "block_size": 4096,
00:19:28.128      "num_blocks": 1310720,
00:19:28.128      "uuid": "09e6e95e-b500-4b46-94ec-1e73c145927b",
00:19:28.128      "numa_id": -1,
00:19:28.128      "assigned_rate_limits": {
00:19:28.128        "rw_ios_per_sec": 0,
00:19:28.128        "rw_mbytes_per_sec": 0,
00:19:28.128        "r_mbytes_per_sec": 0,
00:19:28.128        "w_mbytes_per_sec": 0
00:19:28.128      },
00:19:28.128      "claimed": true,
00:19:28.128      "claim_type": "read_many_write_one",
00:19:28.128      "zoned": false,
00:19:28.128      "supported_io_types": {
00:19:28.128        "read": true,
00:19:28.128        "write": true,
00:19:28.128        "unmap": true,
00:19:28.128        "flush": true,
00:19:28.128        "reset": true,
00:19:28.128        "nvme_admin": true,
00:19:28.128        "nvme_io": true,
00:19:28.128        "nvme_io_md": false,
00:19:28.128        "write_zeroes": true,
00:19:28.128        "zcopy": false,
00:19:28.128        "get_zone_info": false,
00:19:28.128        "zone_management": false,
00:19:28.128        "zone_append": false,
00:19:28.128        "compare": true,
00:19:28.128        "compare_and_write": false,
00:19:28.128        "abort": true,
00:19:28.128        "seek_hole": false,
00:19:28.128        "seek_data": false,
00:19:28.128        "copy": true,
00:19:28.128        "nvme_iov_md": false
00:19:28.128      },
00:19:28.128      "driver_specific": {
00:19:28.128        "nvme": [
00:19:28.128          {
00:19:28.128            "pci_address": "0000:00:11.0",
00:19:28.128            "trid": {
00:19:28.128              "trtype": "PCIe",
00:19:28.128              "traddr": "0000:00:11.0"
00:19:28.128            },
00:19:28.128            "ctrlr_data": {
00:19:28.128              "cntlid": 0,
00:19:28.128              "vendor_id": "0x1b36",
00:19:28.128              "model_number": "QEMU NVMe Ctrl",
00:19:28.128              "serial_number": "12341",
00:19:28.128              "firmware_revision": "8.0.0",
00:19:28.128              "subnqn": "nqn.2019-08.org.qemu:12341",
00:19:28.128              "oacs": {
00:19:28.128                "security": 0,
00:19:28.128                "format": 1,
00:19:28.128                "firmware": 0,
00:19:28.128                "ns_manage": 1
00:19:28.128              },
00:19:28.128              "multi_ctrlr": false,
00:19:28.128              "ana_reporting": false
00:19:28.128            },
00:19:28.128            "vs": {
00:19:28.128              "nvme_version": "1.4"
00:19:28.128            },
00:19:28.128            "ns_data": {
00:19:28.128              "id": 1,
00:19:28.128              "can_share": false
00:19:28.128            }
00:19:28.128          }
00:19:28.128        ],
00:19:28.128        "mp_policy": "active_passive"
00:19:28.128      }
00:19:28.128    }
00:19:28.128  ]'
00:19:28.128      17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:28.128     17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:19:28.128      17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:28.389     17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720
00:19:28.389     17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:19:28.389     17:08:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120
00:19:28.389    17:08:51 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120
00:19:28.389    17:08:51 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
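
get_bdev_size above reduces the bdev_get_bdevs JSON to a size in MiB: block_size 4096 × num_blocks 1310720 = 5368709120 B = 5120 MiB, which is why base_size ends up as 5120. A minimal standalone sketch of the same computation, assuming rpc.py and jq are on PATH and a bdev named nvme0n1 is attached:

    # size in MiB = block_size * num_blocks / 2^20
    info=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 in this run
    echo $(( bs * nb / 1024 / 1024 ))         # prints 5120
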
00:19:28.389    17:08:51 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols
00:19:28.389     17:08:51 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:19:28.389     17:08:51 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:19:28.389    17:08:51 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=8795e123-6f50-498d-b383-99b13feb4ebc
00:19:28.389    17:08:51 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores
00:19:28.389    17:08:51 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8795e123-6f50-498d-b383-99b13feb4ebc
00:19:28.650     17:08:51 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:19:28.911    17:08:51 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=15d4bce2-aacc-4bab-aa28-3868b9920eae
00:19:28.911    17:08:51 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15d4bce2-aacc-4bab-aa28-3868b9920eae
00:19:29.172   17:08:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e7dc1c2f-adfa-4653-bcaf-12a70d110a43
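
clear_lvols wipes any lvstore left over from a previous run before a fresh one named lvs is created on nvme0n1, and the lvol carved from it is thin-provisioned (-t): its 103424 MiB logical size far exceeds the 5120 MiB base device, which only works because clusters are allocated on first write. A sketch of the same sequence, assuming rpc.py on PATH (UUIDs differ per run):

    # Drop stale lvstores, then create a fresh one plus a thin lvol.
    for lvs in $(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
    # Prints the new lvol's UUID, captured above as split_bdev.
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"
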
00:19:29.172    17:08:52 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.172    17:08:52 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0
00:19:29.172    17:08:52 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:19:29.172    17:08:52 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.172    17:08:52 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size=
00:19:29.172     17:08:52 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.172     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.172     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:29.172     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:19:29.172     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:19:29.172      17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.433     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:29.433    {
00:19:29.433      "name": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:29.433      "aliases": [
00:19:29.433        "lvs/nvme0n1p0"
00:19:29.433      ],
00:19:29.433      "product_name": "Logical Volume",
00:19:29.433      "block_size": 4096,
00:19:29.433      "num_blocks": 26476544,
00:19:29.433      "uuid": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:29.433      "assigned_rate_limits": {
00:19:29.433        "rw_ios_per_sec": 0,
00:19:29.433        "rw_mbytes_per_sec": 0,
00:19:29.433        "r_mbytes_per_sec": 0,
00:19:29.433        "w_mbytes_per_sec": 0
00:19:29.433      },
00:19:29.434      "claimed": false,
00:19:29.434      "zoned": false,
00:19:29.434      "supported_io_types": {
00:19:29.434        "read": true,
00:19:29.434        "write": true,
00:19:29.434        "unmap": true,
00:19:29.434        "flush": false,
00:19:29.434        "reset": true,
00:19:29.434        "nvme_admin": false,
00:19:29.434        "nvme_io": false,
00:19:29.434        "nvme_io_md": false,
00:19:29.434        "write_zeroes": true,
00:19:29.434        "zcopy": false,
00:19:29.434        "get_zone_info": false,
00:19:29.434        "zone_management": false,
00:19:29.434        "zone_append": false,
00:19:29.434        "compare": false,
00:19:29.434        "compare_and_write": false,
00:19:29.434        "abort": false,
00:19:29.434        "seek_hole": true,
00:19:29.434        "seek_data": true,
00:19:29.434        "copy": false,
00:19:29.434        "nvme_iov_md": false
00:19:29.434      },
00:19:29.434      "driver_specific": {
00:19:29.434        "lvol": {
00:19:29.434          "lvol_store_uuid": "15d4bce2-aacc-4bab-aa28-3868b9920eae",
00:19:29.434          "base_bdev": "nvme0n1",
00:19:29.434          "thin_provision": true,
00:19:29.434          "num_allocated_clusters": 0,
00:19:29.434          "snapshot": false,
00:19:29.434          "clone": false,
00:19:29.434          "esnap_clone": false
00:19:29.434        }
00:19:29.434      }
00:19:29.434    }
00:19:29.434  ]'
00:19:29.434      17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:29.434     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:19:29.434      17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:29.434     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:19:29.434     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:19:29.434     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:19:29.434    17:08:52 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171
00:19:29.434    17:08:52 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev
00:19:29.434     17:08:52 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:19:29.696    17:08:52 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:19:29.696    17:08:52 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]]
00:19:29.696     17:08:52 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.696     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.696     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:29.696     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:19:29.696     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:19:29.696      17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:29.958     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:29.958    {
00:19:29.958      "name": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:29.958      "aliases": [
00:19:29.958        "lvs/nvme0n1p0"
00:19:29.958      ],
00:19:29.958      "product_name": "Logical Volume",
00:19:29.958      "block_size": 4096,
00:19:29.958      "num_blocks": 26476544,
00:19:29.958      "uuid": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:29.958      "assigned_rate_limits": {
00:19:29.958        "rw_ios_per_sec": 0,
00:19:29.958        "rw_mbytes_per_sec": 0,
00:19:29.958        "r_mbytes_per_sec": 0,
00:19:29.958        "w_mbytes_per_sec": 0
00:19:29.958      },
00:19:29.958      "claimed": false,
00:19:29.958      "zoned": false,
00:19:29.958      "supported_io_types": {
00:19:29.958        "read": true,
00:19:29.958        "write": true,
00:19:29.958        "unmap": true,
00:19:29.958        "flush": false,
00:19:29.958        "reset": true,
00:19:29.958        "nvme_admin": false,
00:19:29.958        "nvme_io": false,
00:19:29.958        "nvme_io_md": false,
00:19:29.958        "write_zeroes": true,
00:19:29.958        "zcopy": false,
00:19:29.958        "get_zone_info": false,
00:19:29.958        "zone_management": false,
00:19:29.958        "zone_append": false,
00:19:29.958        "compare": false,
00:19:29.958        "compare_and_write": false,
00:19:29.958        "abort": false,
00:19:29.958        "seek_hole": true,
00:19:29.958        "seek_data": true,
00:19:29.958        "copy": false,
00:19:29.958        "nvme_iov_md": false
00:19:29.958      },
00:19:29.958      "driver_specific": {
00:19:29.958        "lvol": {
00:19:29.958          "lvol_store_uuid": "15d4bce2-aacc-4bab-aa28-3868b9920eae",
00:19:29.958          "base_bdev": "nvme0n1",
00:19:29.958          "thin_provision": true,
00:19:29.958          "num_allocated_clusters": 0,
00:19:29.958          "snapshot": false,
00:19:29.958          "clone": false,
00:19:29.958          "esnap_clone": false
00:19:29.958        }
00:19:29.958      }
00:19:29.958    }
00:19:29.958  ]'
00:19:29.958      17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:29.958     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:19:29.958      17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:29.958     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:19:29.958     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:19:29.958     17:08:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:19:29.958    17:08:52 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171
00:19:29.958    17:08:52 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:19:30.219   17:08:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0
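
create_nv_cache_bdev attaches a second controller (nvc0 at 0000:00:10.0) and, since no explicit cache size was passed in (the [[ -z '' ]] check above), derives cache_size=5171 MiB from the 103424 MiB base bdev, then carves exactly that much out of nvc0n1 with bdev_split_create; the first split unit, nvc0n1p0, becomes the FTL write-buffer cache. A sketch of the calls involved, with the cache size copied from the trace above:

    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # One 5171 MiB split unit; rpc.py prints the split bdev name nvc0n1p0.
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    nv_cache=nvc0n1p0
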
00:19:30.220   17:08:53 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60
00:19:30.220    17:08:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:30.220    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:30.220    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:30.220    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:19:30.220    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:19:30.220     17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7dc1c2f-adfa-4653-bcaf-12a70d110a43
00:19:30.480    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:30.480    {
00:19:30.480      "name": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:30.480      "aliases": [
00:19:30.480        "lvs/nvme0n1p0"
00:19:30.480      ],
00:19:30.480      "product_name": "Logical Volume",
00:19:30.480      "block_size": 4096,
00:19:30.480      "num_blocks": 26476544,
00:19:30.480      "uuid": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:30.480      "assigned_rate_limits": {
00:19:30.480        "rw_ios_per_sec": 0,
00:19:30.480        "rw_mbytes_per_sec": 0,
00:19:30.480        "r_mbytes_per_sec": 0,
00:19:30.480        "w_mbytes_per_sec": 0
00:19:30.480      },
00:19:30.480      "claimed": false,
00:19:30.480      "zoned": false,
00:19:30.480      "supported_io_types": {
00:19:30.480        "read": true,
00:19:30.480        "write": true,
00:19:30.480        "unmap": true,
00:19:30.480        "flush": false,
00:19:30.480        "reset": true,
00:19:30.480        "nvme_admin": false,
00:19:30.480        "nvme_io": false,
00:19:30.480        "nvme_io_md": false,
00:19:30.480        "write_zeroes": true,
00:19:30.480        "zcopy": false,
00:19:30.480        "get_zone_info": false,
00:19:30.480        "zone_management": false,
00:19:30.480        "zone_append": false,
00:19:30.480        "compare": false,
00:19:30.480        "compare_and_write": false,
00:19:30.480        "abort": false,
00:19:30.480        "seek_hole": true,
00:19:30.480        "seek_data": true,
00:19:30.480        "copy": false,
00:19:30.480        "nvme_iov_md": false
00:19:30.480      },
00:19:30.480      "driver_specific": {
00:19:30.480        "lvol": {
00:19:30.480          "lvol_store_uuid": "15d4bce2-aacc-4bab-aa28-3868b9920eae",
00:19:30.480          "base_bdev": "nvme0n1",
00:19:30.480          "thin_provision": true,
00:19:30.480          "num_allocated_clusters": 0,
00:19:30.480          "snapshot": false,
00:19:30.480          "clone": false,
00:19:30.481          "esnap_clone": false
00:19:30.481        }
00:19:30.481      }
00:19:30.481    }
00:19:30.481  ]'
00:19:30.481     17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:19:30.481    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:19:30.481     17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:19:30.481    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:19:30.481    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:19:30.481    17:08:53 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:19:30.481   17:08:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
00:19:30.481   17:08:53 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e7dc1c2f-adfa-4653-bcaf-12a70d110a43 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
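
The bdev_ftl_create call above binds the thin lvol (-d, user data) to the split partition (-c, the NV write-buffer cache). --core_mask 7 pins FTL work to the three reactors started on cores 0-2 at the top of this run, --l2p_dram_limit 60 caps the resident L2P at 60 MiB (trim.sh computed l2p_dram_size_mb=60 from l2p_percentage=60 just above), and --overprovisioning 10 reserves roughly a tenth of the capacity as relocation headroom. Note also -t 240, which raises rpc.py's client-side timeout because first-time startup blocks on the NV cache scrub. Reformatted for readability:

    # Same call as traced above; -t 240 covers the long-running startup.
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d e7dc1c2f-adfa-4653-bcaf-12a70d110a43 \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The startup trace that follows logs each management step with its duration.
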
00:19:30.743  [2024-12-09 17:08:53.617003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.617045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:19:30.743  [2024-12-09 17:08:53.617060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:19:30.743  [2024-12-09 17:08:53.617067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.619427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.619455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:19:30.743  [2024-12-09 17:08:53.619465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.335 ms
00:19:30.743  [2024-12-09 17:08:53.619471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.619541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:30.743  [2024-12-09 17:08:53.620113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:30.743  [2024-12-09 17:08:53.620137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.620144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:19:30.743  [2024-12-09 17:08:53.620153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.600 ms
00:19:30.743  [2024-12-09 17:08:53.620160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.620247] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fc2373b3-a810-4875-b935-5ccc0d51d98c
00:19:30.743  [2024-12-09 17:08:53.621536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.621563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:19:30.743  [2024-12-09 17:08:53.621573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:19:30.743  [2024-12-09 17:08:53.621581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.628501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.628530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:19:30.743  [2024-12-09 17:08:53.628541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.864 ms
00:19:30.743  [2024-12-09 17:08:53.628549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.628648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.628659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:19:30.743  [2024-12-09 17:08:53.628665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.062 ms
00:19:30.743  [2024-12-09 17:08:53.628676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.628701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.628710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:19:30.743  [2024-12-09 17:08:53.628717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:19:30.743  [2024-12-09 17:08:53.628727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.628750] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:30.743  [2024-12-09 17:08:53.632019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.632045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:19:30.743  [2024-12-09 17:08:53.632055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.270 ms
00:19:30.743  [2024-12-09 17:08:53.632062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.632106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.632124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:19:30.743  [2024-12-09 17:08:53.632132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:19:30.743  [2024-12-09 17:08:53.632138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.632161] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:19:30.743  [2024-12-09 17:08:53.632277] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:19:30.743  [2024-12-09 17:08:53.632290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:30.743  [2024-12-09 17:08:53.632300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:19:30.743  [2024-12-09 17:08:53.632309] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632316] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632324] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:19:30.743  [2024-12-09 17:08:53.632329] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:19:30.743  [2024-12-09 17:08:53.632338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:19:30.743  [2024-12-09 17:08:53.632345] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:19:30.743  [2024-12-09 17:08:53.632352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.632358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:19:30.743  [2024-12-09 17:08:53.632365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.192 ms
00:19:30.743  [2024-12-09 17:08:53.632372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
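
The layout numbers above are internally consistent and worth checking once. With a 4-byte address per entry, the 23592960 L2P entries need exactly the 90.00 MiB "Region l2p" dumped below, and at 4 KiB per mapped block they describe the 90 GiB that ftl0 later exports as num_blocks=23592960 (the 103424 MiB base capacity less overprovisioning and metadata, rounded to band boundaries). The 60 MiB --l2p_dram_limit therefore keeps only about two thirds of the table resident at once, matching the "59 (of 60) MiB" the l2p cache reports further down:

    echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90    -> l2p region, MiB
    echo $(( 23592960 * 4096 / 1024 / 1024 ))   # 92160 -> exported capacity, MiB (90 GiB)
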
00:19:30.743  [2024-12-09 17:08:53.632445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.743  [2024-12-09 17:08:53.632451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:19:30.743  [2024-12-09 17:08:53.632459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:19:30.743  [2024-12-09 17:08:53.632465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.743  [2024-12-09 17:08:53.632572] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:30.743  [2024-12-09 17:08:53.632665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:30.743  [2024-12-09 17:08:53.632674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:30.743  [2024-12-09 17:08:53.632694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:30.743  [2024-12-09 17:08:53.632714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:19:30.743  [2024-12-09 17:08:53.632727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:30.743  [2024-12-09 17:08:53.632732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:19:30.743  [2024-12-09 17:08:53.632740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:19:30.743  [2024-12-09 17:08:53.632745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:30.743  [2024-12-09 17:08:53.632754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:19:30.743  [2024-12-09 17:08:53.632760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:30.743  [2024-12-09 17:08:53.632774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:30.743  [2024-12-09 17:08:53.632792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:30.743  [2024-12-09 17:08:53.632811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:30.743  [2024-12-09 17:08:53.632829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:30.743  [2024-12-09 17:08:53.632857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:19:30.743  [2024-12-09 17:08:53.632864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:30.743  [2024-12-09 17:08:53.632869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:30.743  [2024-12-09 17:08:53.632877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:19:30.744  [2024-12-09 17:08:53.632883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:19:30.744  [2024-12-09 17:08:53.632890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:30.744  [2024-12-09 17:08:53.632895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:19:30.744  [2024-12-09 17:08:53.632903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:19:30.744  [2024-12-09 17:08:53.632909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:19:30.744  [2024-12-09 17:08:53.632917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:19:30.744  [2024-12-09 17:08:53.632922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:30.744  [2024-12-09 17:08:53.632929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:19:30.744  [2024-12-09 17:08:53.632934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:19:30.744  [2024-12-09 17:08:53.632941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:30.744  [2024-12-09 17:08:53.632946] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:30.744  [2024-12-09 17:08:53.632953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:30.744  [2024-12-09 17:08:53.632959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:19:30.744  [2024-12-09 17:08:53.632967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:30.744  [2024-12-09 17:08:53.632973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:30.744  [2024-12-09 17:08:53.632982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:19:30.744  [2024-12-09 17:08:53.632988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:19:30.744  [2024-12-09 17:08:53.632995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:30.744  [2024-12-09 17:08:53.633001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:19:30.744  [2024-12-09 17:08:53.633007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:19:30.744  [2024-12-09 17:08:53.633014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:30.744  [2024-12-09 17:08:53.633023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:30.744  [2024-12-09 17:08:53.633041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:19:30.744  [2024-12-09 17:08:53.633046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:19:30.744  [2024-12-09 17:08:53.633053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:19:30.744  [2024-12-09 17:08:53.633059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:19:30.744  [2024-12-09 17:08:53.633066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:19:30.744  [2024-12-09 17:08:53.633072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:19:30.744  [2024-12-09 17:08:53.633079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:19:30.744  [2024-12-09 17:08:53.633084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:19:30.744  [2024-12-09 17:08:53.633094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:19:30.744  [2024-12-09 17:08:53.633125] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:30.744  [2024-12-09 17:08:53.633135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:30.744  [2024-12-09 17:08:53.633149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:30.744  [2024-12-09 17:08:53.633154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:30.744  [2024-12-09 17:08:53.633161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:30.744  [2024-12-09 17:08:53.633166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:30.744  [2024-12-09 17:08:53.633174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:19:30.744  [2024-12-09 17:08:53.633179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.664 ms
00:19:30.744  [2024-12-09 17:08:53.633192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:30.744  [2024-12-09 17:08:53.633248] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:19:30.744  [2024-12-09 17:08:53.633260] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:19:34.108  [2024-12-09 17:08:56.879475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.879547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:19:34.108  [2024-12-09 17:08:56.879562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3246.212 ms
00:19:34.108  [2024-12-09 17:08:56.879573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
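
The per-step durations make it obvious where first-time startup cost goes: scrubbing the five NV cache chunks took 3246.212 ms of the 3674.415 ms total that the "FTL startup" summary reports at the end of this trace, so roughly 88 % of startup on this fresh device was the one-off scrub:

    awk 'BEGIN { printf "%.0f%%\n", 100 * 3246.212 / 3674.415 }'   # 88%
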
00:19:34.108  [2024-12-09 17:08:56.907985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.908030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:19:34.108  [2024-12-09 17:08:56.908043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.183 ms
00:19:34.108  [2024-12-09 17:08:56.908053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.908217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.908231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:19:34.108  [2024-12-09 17:08:56.908253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.068 ms
00:19:34.108  [2024-12-09 17:08:56.908267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.962127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.962171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:19:34.108  [2024-12-09 17:08:56.962184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 53.825 ms
00:19:34.108  [2024-12-09 17:08:56.962194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.962269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.962282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:19:34.108  [2024-12-09 17:08:56.962292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:19:34.108  [2024-12-09 17:08:56.962302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.962711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.962739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:19:34.108  [2024-12-09 17:08:56.962749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.378 ms
00:19:34.108  [2024-12-09 17:08:56.962758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.962887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.962899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:19:34.108  [2024-12-09 17:08:56.962920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.098 ms
00:19:34.108  [2024-12-09 17:08:56.962932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.978855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:56.978887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:19:34.108  [2024-12-09 17:08:56.978897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.883 ms
00:19:34.108  [2024-12-09 17:08:56.978907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:56.991460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:34.108  [2024-12-09 17:08:57.008736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:57.008769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:19:34.108  [2024-12-09 17:08:57.008781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.731 ms
00:19:34.108  [2024-12-09 17:08:57.008790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:57.097630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:57.097674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:19:34.108  [2024-12-09 17:08:57.097689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 88.747 ms
00:19:34.108  [2024-12-09 17:08:57.097697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:57.097931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:57.097944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:19:34.108  [2024-12-09 17:08:57.097958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.161 ms
00:19:34.108  [2024-12-09 17:08:57.097966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:57.121691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:57.121722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:19:34.108  [2024-12-09 17:08:57.121913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.694 ms
00:19:34.108  [2024-12-09 17:08:57.121922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:57.145022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:57.145047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:19:34.108  [2024-12-09 17:08:57.145061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.037 ms
00:19:34.108  [2024-12-09 17:08:57.145069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.108  [2024-12-09 17:08:57.145663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.108  [2024-12-09 17:08:57.145686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:19:34.109  [2024-12-09 17:08:57.145697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.538 ms
00:19:34.109  [2024-12-09 17:08:57.145705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.369  [2024-12-09 17:08:57.219268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.369  [2024-12-09 17:08:57.219303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:19:34.369  [2024-12-09 17:08:57.219319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 73.532 ms
00:19:34.369  [2024-12-09 17:08:57.219328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.369  [2024-12-09 17:08:57.244821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.369  [2024-12-09 17:08:57.244862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:19:34.369  [2024-12-09 17:08:57.244876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.380 ms
00:19:34.369  [2024-12-09 17:08:57.244884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.369  [2024-12-09 17:08:57.267707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.369  [2024-12-09 17:08:57.267739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:19:34.369  [2024-12-09 17:08:57.267751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.763 ms
00:19:34.369  [2024-12-09 17:08:57.267758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.369  [2024-12-09 17:08:57.290599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.370  [2024-12-09 17:08:57.290641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:19:34.370  [2024-12-09 17:08:57.290654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.769 ms
00:19:34.370  [2024-12-09 17:08:57.290662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.370  [2024-12-09 17:08:57.290723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.370  [2024-12-09 17:08:57.290736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:19:34.370  [2024-12-09 17:08:57.290749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:19:34.370  [2024-12-09 17:08:57.290758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.370  [2024-12-09 17:08:57.290835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.370  [2024-12-09 17:08:57.290856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:19:34.370  [2024-12-09 17:08:57.290867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.035 ms
00:19:34.370  [2024-12-09 17:08:57.290875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:34.370  [2024-12-09 17:08:57.291715] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:34.370  [2024-12-09 17:08:57.294724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3674.415 ms, result 0
00:19:34.370  [2024-12-09 17:08:57.295766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:34.370  {
00:19:34.370    "name": "ftl0",
00:19:34.370    "uuid": "fc2373b3-a810-4875-b935-5ccc0d51d98c"
00:19:34.370  }
00:19:34.370   17:08:57 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:19:34.370   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:19:34.370   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:19:34.370   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
00:19:34.370   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:19:34.370   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:19:34.370   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:19:34.630   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:19:34.890  [
00:19:34.890    {
00:19:34.890      "name": "ftl0",
00:19:34.890      "aliases": [
00:19:34.890        "fc2373b3-a810-4875-b935-5ccc0d51d98c"
00:19:34.890      ],
00:19:34.890      "product_name": "FTL disk",
00:19:34.890      "block_size": 4096,
00:19:34.890      "num_blocks": 23592960,
00:19:34.890      "uuid": "fc2373b3-a810-4875-b935-5ccc0d51d98c",
00:19:34.890      "assigned_rate_limits": {
00:19:34.890        "rw_ios_per_sec": 0,
00:19:34.890        "rw_mbytes_per_sec": 0,
00:19:34.890        "r_mbytes_per_sec": 0,
00:19:34.890        "w_mbytes_per_sec": 0
00:19:34.890      },
00:19:34.890      "claimed": false,
00:19:34.890      "zoned": false,
00:19:34.890      "supported_io_types": {
00:19:34.890        "read": true,
00:19:34.890        "write": true,
00:19:34.890        "unmap": true,
00:19:34.890        "flush": true,
00:19:34.890        "reset": false,
00:19:34.890        "nvme_admin": false,
00:19:34.890        "nvme_io": false,
00:19:34.890        "nvme_io_md": false,
00:19:34.890        "write_zeroes": true,
00:19:34.891        "zcopy": false,
00:19:34.891        "get_zone_info": false,
00:19:34.891        "zone_management": false,
00:19:34.891        "zone_append": false,
00:19:34.891        "compare": false,
00:19:34.891        "compare_and_write": false,
00:19:34.891        "abort": false,
00:19:34.891        "seek_hole": false,
00:19:34.891        "seek_data": false,
00:19:34.891        "copy": false,
00:19:34.891        "nvme_iov_md": false
00:19:34.891      },
00:19:34.891      "driver_specific": {
00:19:34.891        "ftl": {
00:19:34.891          "base_bdev": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:34.891          "cache": "nvc0n1p0"
00:19:34.891        }
00:19:34.891      }
00:19:34.891    }
00:19:34.891  ]
00:19:34.891   17:08:57 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0
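
waitforbdev above is the standard autotest settle pattern: flush pending examine callbacks, then ask for the bdev with a bounded wait via bdev_get_bdevs' -t timeout (the 2000 ms default was filled in at autotest_common.sh@906 because no explicit timeout was passed). A minimal sketch of the same idea, with an illustrative retry bound:

    # Wait until ftl0 is registered; the retry count here is illustrative.
    scripts/rpc.py bdev_wait_for_examine
    for i in 1 2 3 4 5; do
        scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 && break
        sleep 1
    done
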
00:19:34.891   17:08:57 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:19:34.891   17:08:57 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:19:34.891   17:08:57 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}'
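
trim.sh brackets the save_subsystem_config output with literal '{"subsystems": [' and ']}' lines, turning the dumped bdev subsystem state into one complete JSON document that a later SPDK app instance can presumably be started from. Sketched as a standalone capture (the output path is illustrative):

    # Capture the current bdev subsystem as a self-contained JSON config.
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /tmp/ftl_bdev.json
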
00:19:34.891    17:08:57 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:19:35.152   17:08:58 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[
00:19:35.152    {
00:19:35.152      "name": "ftl0",
00:19:35.152      "aliases": [
00:19:35.152        "fc2373b3-a810-4875-b935-5ccc0d51d98c"
00:19:35.152      ],
00:19:35.152      "product_name": "FTL disk",
00:19:35.152      "block_size": 4096,
00:19:35.152      "num_blocks": 23592960,
00:19:35.152      "uuid": "fc2373b3-a810-4875-b935-5ccc0d51d98c",
00:19:35.152      "assigned_rate_limits": {
00:19:35.152        "rw_ios_per_sec": 0,
00:19:35.152        "rw_mbytes_per_sec": 0,
00:19:35.152        "r_mbytes_per_sec": 0,
00:19:35.152        "w_mbytes_per_sec": 0
00:19:35.152      },
00:19:35.152      "claimed": false,
00:19:35.152      "zoned": false,
00:19:35.152      "supported_io_types": {
00:19:35.152        "read": true,
00:19:35.152        "write": true,
00:19:35.152        "unmap": true,
00:19:35.152        "flush": true,
00:19:35.152        "reset": false,
00:19:35.152        "nvme_admin": false,
00:19:35.152        "nvme_io": false,
00:19:35.152        "nvme_io_md": false,
00:19:35.152        "write_zeroes": true,
00:19:35.152        "zcopy": false,
00:19:35.152        "get_zone_info": false,
00:19:35.152        "zone_management": false,
00:19:35.152        "zone_append": false,
00:19:35.152        "compare": false,
00:19:35.152        "compare_and_write": false,
00:19:35.152        "abort": false,
00:19:35.152        "seek_hole": false,
00:19:35.152        "seek_data": false,
00:19:35.152        "copy": false,
00:19:35.152        "nvme_iov_md": false
00:19:35.152      },
00:19:35.152      "driver_specific": {
00:19:35.152        "ftl": {
00:19:35.152          "base_bdev": "e7dc1c2f-adfa-4653-bcaf-12a70d110a43",
00:19:35.152          "cache": "nvc0n1p0"
00:19:35.152        }
00:19:35.152      }
00:19:35.152    }
00:19:35.152  ]'
00:19:35.152    17:08:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:19:35.152   17:08:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960
00:19:35.152   17:08:58 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
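
With the exported size in hand (nb=23592960 blocks), the device is torn down again before the trim test proper. The shutdown trace below mirrors startup in reverse: stop the core poller, persist the L2P, NV cache, valid-map, P2L, band and trim metadata, write the superblock, and only then flip the device from the dirty state set at startup to clean, so a later load can come up without recovery. The closing bands dump is sized as expected, every band still free with wr_cnt 0 since nothing was written through ftl0 in this phase:

    echo $(( 261120 * 4096 / 1024 / 1024 ))   # 1020 MiB per band
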
00:19:35.413  [2024-12-09 17:08:58.374932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.374973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:19:35.413  [2024-12-09 17:08:58.374990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:19:35.413  [2024-12-09 17:08:58.375004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.413  [2024-12-09 17:08:58.375038] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:35.413  [2024-12-09 17:08:58.377855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.377884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:19:35.413  [2024-12-09 17:08:58.377902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.790 ms
00:19:35.413  [2024-12-09 17:08:58.377911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.413  [2024-12-09 17:08:58.378375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.378395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:19:35.413  [2024-12-09 17:08:58.378407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.417 ms
00:19:35.413  [2024-12-09 17:08:58.378415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.413  [2024-12-09 17:08:58.382059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.382083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:19:35.413  [2024-12-09 17:08:58.382094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.621 ms
00:19:35.413  [2024-12-09 17:08:58.382104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.413  [2024-12-09 17:08:58.389169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.389197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:19:35.413  [2024-12-09 17:08:58.389209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.022 ms
00:19:35.413  [2024-12-09 17:08:58.389217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.413  [2024-12-09 17:08:58.413283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.413315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:19:35.413  [2024-12-09 17:08:58.413331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.986 ms
00:19:35.413  [2024-12-09 17:08:58.413338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.413  [2024-12-09 17:08:58.429445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.413  [2024-12-09 17:08:58.429477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:19:35.413  [2024-12-09 17:08:58.429491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.049 ms
00:19:35.414  [2024-12-09 17:08:58.429501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.414  [2024-12-09 17:08:58.429692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.414  [2024-12-09 17:08:58.429704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:19:35.414  [2024-12-09 17:08:58.429714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.126 ms
00:19:35.414  [2024-12-09 17:08:58.429722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.675  [2024-12-09 17:08:58.452935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.675  [2024-12-09 17:08:58.452964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:19:35.675  [2024-12-09 17:08:58.452976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.187 ms
00:19:35.675  [2024-12-09 17:08:58.452983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.675  [2024-12-09 17:08:58.476093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.675  [2024-12-09 17:08:58.476123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:19:35.675  [2024-12-09 17:08:58.476139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.057 ms
00:19:35.675  [2024-12-09 17:08:58.476146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.675  [2024-12-09 17:08:58.496489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.675  [2024-12-09 17:08:58.496524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:19:35.675  [2024-12-09 17:08:58.496535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.290 ms
00:19:35.675  [2024-12-09 17:08:58.496540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.675  [2024-12-09 17:08:58.514317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.675  [2024-12-09 17:08:58.514341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:19:35.675  [2024-12-09 17:08:58.514350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.699 ms
00:19:35.675  [2024-12-09 17:08:58.514356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.675  [2024-12-09 17:08:58.514403] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:35.675  [2024-12-09 17:08:58.514415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.675  [2024-12-09 17:08:58.514898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.514994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:19:35.676  [2024-12-09 17:08:58.515124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
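All 100 bands report 0 / 261120 valid blocks, wr_cnt 0 and state "free": the preceding trim workload left the device fully trimmed before shutdown. A throwaway one-liner (not part of the test suite; the ftl0_bands.log filename is hypothetical) that summarizes a bands-validity dump like the one above:

awk '/ Band +[0-9]+:/ { states[$NF]++; total++ }
     END { for (s in states) printf "%s: %d of %d bands\n", s, states[s], total }' ftl0_bands.log

Against this dump it prints "free: 100 of 100 bands".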
00:19:35.676  [2024-12-09 17:08:58.515137] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:19:35.676  [2024-12-09 17:08:58.515146] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:19:35.676  [2024-12-09 17:08:58.515153] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:19:35.676  [2024-12-09 17:08:58.515161] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:19:35.676  [2024-12-09 17:08:58.515166] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:19:35.676  [2024-12-09 17:08:58.515175] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:19:35.676  [2024-12-09 17:08:58.515181] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:35.676  [2024-12-09 17:08:58.515188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:19:35.676  [2024-12-09 17:08:58.515194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:19:35.676  [2024-12-09 17:08:58.515201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:19:35.676  [2024-12-09 17:08:58.515206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:19:35.676  [2024-12-09 17:08:58.515213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.676  [2024-12-09 17:08:58.515219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:19:35.676  [2024-12-09 17:08:58.515227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.811 ms
00:19:35.676  [2024-12-09 17:08:58.515233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
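The "WAF: inf" line in the statistics dump is the write-amplification factor, total writes divided by user writes; with 960 internal writes and zero user writes the ratio is reported as infinite. A minimal sketch of the same arithmetic (illustrative only, not SPDK code):

awk -v t=960 -v u=0 'BEGIN { print "WAF:", (u > 0) ? t / u : "inf" }'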
00:19:35.676  [2024-12-09 17:08:58.525456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.676  [2024-12-09 17:08:58.525482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:19:35.676  [2024-12-09 17:08:58.525493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.203 ms
00:19:35.676  [2024-12-09 17:08:58.525500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.525827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:35.676  [2024-12-09 17:08:58.525841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:19:35.676  [2024-12-09 17:08:58.525866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.271 ms
00:19:35.676  [2024-12-09 17:08:58.525873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.562057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.562084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:19:35.676  [2024-12-09 17:08:58.562094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.562100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.562179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.562186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:19:35.676  [2024-12-09 17:08:58.562195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.562200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.562250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.562259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:19:35.676  [2024-12-09 17:08:58.562270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.562277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.562299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.562306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:19:35.676  [2024-12-09 17:08:58.562314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.562320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.627804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.627855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:19:35.676  [2024-12-09 17:08:58.627867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.627875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.678437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.678477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:19:35.676  [2024-12-09 17:08:58.678488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.678495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.678580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.678589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:19:35.676  [2024-12-09 17:08:58.678600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.678609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.678649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.678658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:19:35.676  [2024-12-09 17:08:58.678666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.678672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.678762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.678771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:19:35.676  [2024-12-09 17:08:58.678779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.678787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.676  [2024-12-09 17:08:58.678832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.676  [2024-12-09 17:08:58.678840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:19:35.676  [2024-12-09 17:08:58.678866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.676  [2024-12-09 17:08:58.678872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.677  [2024-12-09 17:08:58.678919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.677  [2024-12-09 17:08:58.678927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:19:35.677  [2024-12-09 17:08:58.678936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.677  [2024-12-09 17:08:58.678943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.677  [2024-12-09 17:08:58.678997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:35.677  [2024-12-09 17:08:58.679006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:19:35.677  [2024-12-09 17:08:58.679014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:35.677  [2024-12-09 17:08:58.679020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:35.677  [2024-12-09 17:08:58.679175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 304.228 ms, result 0
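The 304.228 ms for 'FTL shutdown' is the wall time of the whole management process; each Action/Rollback step above also logs its own duration. Assuming the notices were captured to a file (shutdown.log is a hypothetical name), the per-step durations can be tallied for comparison with the total:

grep -o 'duration: [0-9.]* ms' shutdown.log | awk '{ sum += $2 } END { print sum, "ms" }'

The sum falls short of the wall time because time spent between the traced steps is not attributed to any of them.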
00:19:35.677  true
00:19:35.677   17:08:58 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 77725
00:19:35.677   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77725 ']'
00:19:35.677   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77725
00:19:35.677    17:08:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:19:35.677   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:35.677    17:08:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77725
00:19:35.937  killing process with pid 77725
00:19:35.937   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:35.937   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:35.937   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77725'
00:19:35.937   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77725
00:19:35.937   17:08:58 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77725
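The trace above shows the shape of killprocess from common/autotest_common.sh: check the pid argument, probe the process with kill -0, resolve its name with ps on Linux (reactor_0 here, so the sudo special case does not apply), then kill it and wait. A paraphrase of that flow (a sketch, not the exact implementation):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                    # '[' -z 77725 ']'
    kill -0 "$pid" || return 1                   # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        name=$(ps --no-headers -o comm= "$pid")  # -> reactor_0
    fi
    # the real helper also compares $name against "sudo"; that branch is not taken here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}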
00:19:42.518   17:09:04 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:42.518  65536+0 records in
00:19:42.518  65536+0 records out
00:19:42.518  268435456 bytes (268 MB, 256 MiB) copied, 0.816361 s, 329 MB/s
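The dd numbers are self-consistent: 65536 blocks of 4 KiB are exactly 268435456 bytes (256 MiB), and 268435456 B / 0.816361 s is roughly 329 MB/s, as reported:

echo $((65536 * 4096))                                            # 268435456
awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 0.816361 / 1e6 }'  # ~329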
00:19:42.518   17:09:05 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:42.518  [2024-12-09 17:09:05.249717] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:19:42.518  [2024-12-09 17:09:05.249824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77908 ]
00:19:42.518  [2024-12-09 17:09:05.401889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:42.518  [2024-12-09 17:09:05.493681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:42.779  [2024-12-09 17:09:05.725953] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:42.779  [2024-12-09 17:09:05.726011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:43.042  [2024-12-09 17:09:05.883555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.883696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:19:43.042  [2024-12-09 17:09:05.883713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:19:43.042  [2024-12-09 17:09:05.883720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.886132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.886167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:19:43.042  [2024-12-09 17:09:05.886177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.393 ms
00:19:43.042  [2024-12-09 17:09:05.886184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.886264] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:43.042  [2024-12-09 17:09:05.886890] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:43.042  [2024-12-09 17:09:05.886909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.886916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:19:43.042  [2024-12-09 17:09:05.886923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.653 ms
00:19:43.042  [2024-12-09 17:09:05.886929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.888242] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:43.042  [2024-12-09 17:09:05.899108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.899141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:19:43.042  [2024-12-09 17:09:05.899154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.868 ms
00:19:43.042  [2024-12-09 17:09:05.899165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.899244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.899254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:19:43.042  [2024-12-09 17:09:05.899261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:19:43.042  [2024-12-09 17:09:05.899266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.905417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.905439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:19:43.042  [2024-12-09 17:09:05.905446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.120 ms
00:19:43.042  [2024-12-09 17:09:05.905453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.905525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.905533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:19:43.042  [2024-12-09 17:09:05.905539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:19:43.042  [2024-12-09 17:09:05.905547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.905566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.905573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:19:43.042  [2024-12-09 17:09:05.905579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:19:43.042  [2024-12-09 17:09:05.905585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.905604] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:43.042  [2024-12-09 17:09:05.908585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.908607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:19:43.042  [2024-12-09 17:09:05.908615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.985 ms
00:19:43.042  [2024-12-09 17:09:05.908621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.908654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.908661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:19:43.042  [2024-12-09 17:09:05.908668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:19:43.042  [2024-12-09 17:09:05.908674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.908691] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:19:43.042  [2024-12-09 17:09:05.908710] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:19:43.042  [2024-12-09 17:09:05.908737] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:19:43.042  [2024-12-09 17:09:05.908748] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:19:43.042  [2024-12-09 17:09:05.908832] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:19:43.042  [2024-12-09 17:09:05.908840] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:43.042  [2024-12-09 17:09:05.908864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:19:43.042  [2024-12-09 17:09:05.908875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:19:43.042  [2024-12-09 17:09:05.908883] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:19:43.042  [2024-12-09 17:09:05.908889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:19:43.042  [2024-12-09 17:09:05.908896] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:19:43.042  [2024-12-09 17:09:05.908902] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:19:43.042  [2024-12-09 17:09:05.908908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:19:43.042  [2024-12-09 17:09:05.908915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.908921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:19:43.042  [2024-12-09 17:09:05.908927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.226 ms
00:19:43.042  [2024-12-09 17:09:05.908933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.909000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.042  [2024-12-09 17:09:05.909010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:19:43.042  [2024-12-09 17:09:05.909016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:19:43.042  [2024-12-09 17:09:05.909022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.042  [2024-12-09 17:09:05.909099] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:43.042  [2024-12-09 17:09:05.909108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:43.042  [2024-12-09 17:09:05.909114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:19:43.042  [2024-12-09 17:09:05.909121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:43.042  [2024-12-09 17:09:05.909133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:19:43.042  [2024-12-09 17:09:05.909145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:43.042  [2024-12-09 17:09:05.909152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:19:43.042  [2024-12-09 17:09:05.909163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:43.042  [2024-12-09 17:09:05.909174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:19:43.042  [2024-12-09 17:09:05.909179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:19:43.042  [2024-12-09 17:09:05.909185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:43.042  [2024-12-09 17:09:05.909190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:19:43.042  [2024-12-09 17:09:05.909195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:43.042  [2024-12-09 17:09:05.909207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:19:43.042  [2024-12-09 17:09:05.909212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:43.042  [2024-12-09 17:09:05.909223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:43.042  [2024-12-09 17:09:05.909233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:43.042  [2024-12-09 17:09:05.909238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:19:43.042  [2024-12-09 17:09:05.909243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:43.042  [2024-12-09 17:09:05.909248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:43.043  [2024-12-09 17:09:05.909254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:19:43.043  [2024-12-09 17:09:05.909259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:43.043  [2024-12-09 17:09:05.909264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:43.043  [2024-12-09 17:09:05.909269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:19:43.043  [2024-12-09 17:09:05.909274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:19:43.043  [2024-12-09 17:09:05.909279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:43.043  [2024-12-09 17:09:05.909284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:19:43.043  [2024-12-09 17:09:05.909289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:19:43.043  [2024-12-09 17:09:05.909294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:43.043  [2024-12-09 17:09:05.909299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:19:43.043  [2024-12-09 17:09:05.909304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:19:43.043  [2024-12-09 17:09:05.909309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:19:43.043  [2024-12-09 17:09:05.909313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:19:43.043  [2024-12-09 17:09:05.909321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:43.043  [2024-12-09 17:09:05.909326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:19:43.043  [2024-12-09 17:09:05.909331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:19:43.043  [2024-12-09 17:09:05.909336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:43.043  [2024-12-09 17:09:05.909342] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:43.043  [2024-12-09 17:09:05.909348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:43.043  [2024-12-09 17:09:05.909355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:19:43.043  [2024-12-09 17:09:05.909361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:19:43.043  [2024-12-09 17:09:05.909367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:43.043  [2024-12-09 17:09:05.909372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:19:43.043  [2024-12-09 17:09:05.909377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:19:43.043  [2024-12-09 17:09:05.909382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:43.043  [2024-12-09 17:09:05.909388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:19:43.043  [2024-12-09 17:09:05.909394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:19:43.043  [2024-12-09 17:09:05.909400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:43.043  [2024-12-09 17:09:05.909408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:43.043  [2024-12-09 17:09:05.909420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:19:43.043  [2024-12-09 17:09:05.909426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:19:43.043  [2024-12-09 17:09:05.909431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:19:43.043  [2024-12-09 17:09:05.909437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:19:43.043  [2024-12-09 17:09:05.909443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:19:43.043  [2024-12-09 17:09:05.909449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:19:43.043  [2024-12-09 17:09:05.909455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:19:43.043  [2024-12-09 17:09:05.909461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:19:43.043  [2024-12-09 17:09:05.909466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:19:43.043  [2024-12-09 17:09:05.909493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:43.043  [2024-12-09 17:09:05.909499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:43.043  [2024-12-09 17:09:05.909513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:43.043  [2024-12-09 17:09:05.909519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:43.043  [2024-12-09 17:09:05.909525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:43.043  [2024-12-09 17:09:05.909530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.909539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:19:43.043  [2024-12-09 17:09:05.909545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.484 ms
00:19:43.043  [2024-12-09 17:09:05.909550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
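The layout sizes are internally consistent, assuming the FTL's 4 KiB block size: 23592960 L2P entries at the 4-byte address size come to exactly 90 MiB (the Region l2p size above), blk_sz:0x20 is 32 blocks = 0.12 MiB (the sb regions), and blk_sz:0x1900000 is 26214400 blocks = 102400 MiB (the data_btm region):

awk 'BEGIN { printf "l2p:      %.2f MiB\n", 23592960 * 4 / 1048576 }'     # 90.00
awk 'BEGIN { printf "sb:       %.2f MiB\n", 32 * 4096 / 1048576 }'        # 0.12
awk 'BEGIN { printf "data_btm: %.0f MiB\n", 26214400 * 4096 / 1048576 }'  # 102400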
00:19:43.043  [2024-12-09 17:09:05.933812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.933839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:19:43.043  [2024-12-09 17:09:05.933855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.199 ms
00:19:43.043  [2024-12-09 17:09:05.933862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:05.933961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.933969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:19:43.043  [2024-12-09 17:09:05.933976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:19:43.043  [2024-12-09 17:09:05.933982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:05.977805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.977837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:19:43.043  [2024-12-09 17:09:05.977865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 43.806 ms
00:19:43.043  [2024-12-09 17:09:05.977872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:05.977948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.977958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:19:43.043  [2024-12-09 17:09:05.977965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:19:43.043  [2024-12-09 17:09:05.977971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:05.978355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.978367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:19:43.043  [2024-12-09 17:09:05.978380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.368 ms
00:19:43.043  [2024-12-09 17:09:05.978387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:05.978502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.978510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:19:43.043  [2024-12-09 17:09:05.978517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.095 ms
00:19:43.043  [2024-12-09 17:09:05.978524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:05.990893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:05.991016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:19:43.043  [2024-12-09 17:09:05.991031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.351 ms
00:19:43.043  [2024-12-09 17:09:05.991037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:06.001608] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:19:43.043  [2024-12-09 17:09:06.001637] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:43.043  [2024-12-09 17:09:06.001647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:06.001654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:19:43.043  [2024-12-09 17:09:06.001661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.533 ms
00:19:43.043  [2024-12-09 17:09:06.001667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:06.020340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:06.020368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:19:43.043  [2024-12-09 17:09:06.020378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.617 ms
00:19:43.043  [2024-12-09 17:09:06.020385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:06.029801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:06.029825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:19:43.043  [2024-12-09 17:09:06.029834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.361 ms
00:19:43.043  [2024-12-09 17:09:06.029840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:06.038711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:06.038736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:19:43.043  [2024-12-09 17:09:06.038743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.808 ms
00:19:43.043  [2024-12-09 17:09:06.038749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.043  [2024-12-09 17:09:06.039235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.043  [2024-12-09 17:09:06.039252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:19:43.043  [2024-12-09 17:09:06.039260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.423 ms
00:19:43.043  [2024-12-09 17:09:06.039267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.086535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.086578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:19:43.306  [2024-12-09 17:09:06.086590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.247 ms
00:19:43.306  [2024-12-09 17:09:06.086598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.094577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:43.306  [2024-12-09 17:09:06.108790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.108822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:19:43.306  [2024-12-09 17:09:06.108833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.109 ms
00:19:43.306  [2024-12-09 17:09:06.108843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.108930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.108939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:19:43.306  [2024-12-09 17:09:06.108946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:19:43.306  [2024-12-09 17:09:06.108953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.108998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.109005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:19:43.306  [2024-12-09 17:09:06.109012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:19:43.306  [2024-12-09 17:09:06.109022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.109048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.109055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:19:43.306  [2024-12-09 17:09:06.109063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:19:43.306  [2024-12-09 17:09:06.109069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.109097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:43.306  [2024-12-09 17:09:06.109105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.109111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:19:43.306  [2024-12-09 17:09:06.109118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:19:43.306  [2024-12-09 17:09:06.109125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.127430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.127456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:19:43.306  [2024-12-09 17:09:06.127466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.288 ms
00:19:43.306  [2024-12-09 17:09:06.127474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.127548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.306  [2024-12-09 17:09:06.127557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:19:43.306  [2024-12-09 17:09:06.127564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:19:43.306  [2024-12-09 17:09:06.127570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:43.306  [2024-12-09 17:09:06.128641] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:43.306  [2024-12-09 17:09:06.131018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 244.814 ms, result 0
00:19:43.306  [2024-12-09 17:09:06.131729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:43.306  [2024-12-09 17:09:06.142499] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:44.248  
[2024-12-09T17:09:08.231Z] Copying: 22/256 [MB] (22 MBps)
[2024-12-09T17:09:09.172Z] Copying: 38/256 [MB] (16 MBps)
[2024-12-09T17:09:10.559Z] Copying: 49/256 [MB] (11 MBps)
[2024-12-09T17:09:11.504Z] Copying: 60/256 [MB] (10 MBps)
[2024-12-09T17:09:12.519Z] Copying: 70/256 [MB] (10 MBps)
[2024-12-09T17:09:13.464Z] Copying: 85/256 [MB] (14 MBps)
[2024-12-09T17:09:14.408Z] Copying: 97/256 [MB] (12 MBps)
[2024-12-09T17:09:15.348Z] Copying: 111/256 [MB] (13 MBps)
[2024-12-09T17:09:16.290Z] Copying: 126/256 [MB] (14 MBps)
[2024-12-09T17:09:17.235Z] Copying: 142/256 [MB] (16 MBps)
[2024-12-09T17:09:18.181Z] Copying: 159/256 [MB] (17 MBps)
[2024-12-09T17:09:19.559Z] Copying: 173404/262144 [kB] (9880 kBps)
[2024-12-09T17:09:20.503Z] Copying: 205/256 [MB] (36 MBps)
[2024-12-09T17:09:21.449Z] Copying: 224/256 [MB] (18 MBps)
[2024-12-09T17:09:22.394Z] Copying: 238/256 [MB] (14 MBps)
[2024-12-09T17:09:22.656Z] Copying: 252/256 [MB] (14 MBps)
[2024-12-09T17:09:22.656Z] Copying: 256/256 [MB] (average 15 MBps)
00:19:59.615  [2024-12-09 17:09:22.499604] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
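The reported average squares with the timestamps: the copy ran from roughly 17:09:06 to 17:09:22.5, so 256 MB over about 16.5 s is ~15.5 MBps, which spdk_dd reports as the 15 MBps shown:

awk 'BEGIN { printf "%.1f MBps\n", 256 / 16.5 }'  # ~15.5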
00:19:59.615  [2024-12-09 17:09:22.509639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.509678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:19:59.615  [2024-12-09 17:09:22.509692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:19:59.615  [2024-12-09 17:09:22.509706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.509728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:59.615  [2024-12-09 17:09:22.512592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.512623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:19:59.615  [2024-12-09 17:09:22.512634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.850 ms
00:19:59.615  [2024-12-09 17:09:22.512643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.515354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.515485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:19:59.615  [2024-12-09 17:09:22.515502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.687 ms
00:19:59.615  [2024-12-09 17:09:22.515510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.523594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.523635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:19:59.615  [2024-12-09 17:09:22.523645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.064 ms
00:19:59.615  [2024-12-09 17:09:22.523653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.530836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.530874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:19:59.615  [2024-12-09 17:09:22.530885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.137 ms
00:19:59.615  [2024-12-09 17:09:22.530895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.555407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.555445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:19:59.615  [2024-12-09 17:09:22.555457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.466 ms
00:19:59.615  [2024-12-09 17:09:22.555465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.570709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.570867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:19:59.615  [2024-12-09 17:09:22.570891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.205 ms
00:19:59.615  [2024-12-09 17:09:22.570899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.571289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.571321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:19:59.615  [2024-12-09 17:09:22.571333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.098 ms
00:19:59.615  [2024-12-09 17:09:22.571350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.596470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.596512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:19:59.615  [2024-12-09 17:09:22.596524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.103 ms
00:19:59.615  [2024-12-09 17:09:22.596531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.615  [2024-12-09 17:09:22.620418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.615  [2024-12-09 17:09:22.620453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:19:59.615  [2024-12-09 17:09:22.620464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.813 ms
00:19:59.615  [2024-12-09 17:09:22.620471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.616  [2024-12-09 17:09:22.643937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.616  [2024-12-09 17:09:22.643969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:19:59.616  [2024-12-09 17:09:22.643980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.420 ms
00:19:59.616  [2024-12-09 17:09:22.643987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.877  [2024-12-09 17:09:22.667592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.877  [2024-12-09 17:09:22.667625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:19:59.877  [2024-12-09 17:09:22.667635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.540 ms
00:19:59.877  [2024-12-09 17:09:22.667643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.877  [2024-12-09 17:09:22.667680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:59.877  [2024-12-09 17:09:22.667696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.877  [2024-12-09 17:09:22.667884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.667996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:19:59.878  [2024-12-09 17:09:22.668513] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:19:59.878  [2024-12-09 17:09:22.668521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:19:59.878  [2024-12-09 17:09:22.668529] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:19:59.878  [2024-12-09 17:09:22.668537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:19:59.878  [2024-12-09 17:09:22.668546] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:19:59.878  [2024-12-09 17:09:22.668554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:19:59.878  [2024-12-09 17:09:22.668562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:59.878  [2024-12-09 17:09:22.668570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:19:59.878  [2024-12-09 17:09:22.668580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:19:59.878  [2024-12-09 17:09:22.668586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:19:59.878  [2024-12-09 17:09:22.668593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:19:59.878  [2024-12-09 17:09:22.668600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.878  [2024-12-09 17:09:22.668608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:19:59.878  [2024-12-09 17:09:22.668617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.920 ms
00:19:59.878  [2024-12-09 17:09:22.668624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.878  [2024-12-09 17:09:22.681960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.878  [2024-12-09 17:09:22.682104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:19:59.878  [2024-12-09 17:09:22.682120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.307 ms
00:19:59.878  [2024-12-09 17:09:22.682128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.682522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.879  [2024-12-09 17:09:22.682539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:19:59.879  [2024-12-09 17:09:22.682548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.354 ms
00:19:59.879  [2024-12-09 17:09:22.682556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.720771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.720808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:19:59.879  [2024-12-09 17:09:22.720819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.720832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.720939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.720963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:19:59.879  [2024-12-09 17:09:22.720972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.720980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.721029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.721040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:19:59.879  [2024-12-09 17:09:22.721049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.721056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.721077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.721085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:19:59.879  [2024-12-09 17:09:22.721092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.721100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.805349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.805409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:19:59.879  [2024-12-09 17:09:22.805423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.805432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.877818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.877908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:19:59.879  [2024-12-09 17:09:22.877923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.877933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.878046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:19:59.879  [2024-12-09 17:09:22.878056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.878066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.878121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:19:59.879  [2024-12-09 17:09:22.878130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.878139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.878260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:19:59.879  [2024-12-09 17:09:22.878269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.878279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.878332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:19:59.879  [2024-12-09 17:09:22.878344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.878354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.878422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:19:59.879  [2024-12-09 17:09:22.878431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.878440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:59.879  [2024-12-09 17:09:22.878518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:19:59.879  [2024-12-09 17:09:22.878526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:19:59.879  [2024-12-09 17:09:22.878535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:19:59.879  [2024-12-09 17:09:22.878726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.056 ms, result 0
00:20:00.822  
00:20:00.822  
00:20:00.822   17:09:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78099
00:20:00.822   17:09:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78099
00:20:00.822   17:09:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78099 ']'
00:20:00.822   17:09:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:00.822   17:09:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:00.822   17:09:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:00.822  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:00.822   17:09:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:00.822   17:09:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:00.822   17:09:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:01.082  [2024-12-09 17:09:23.934299] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:20:01.082  [2024-12-09 17:09:23.934426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78099 ]
00:20:01.082  [2024-12-09 17:09:24.092291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:01.343  [2024-12-09 17:09:24.206597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:01.916   17:09:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:01.916   17:09:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:20:01.917   17:09:24 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:20:02.181  [2024-12-09 17:09:25.151476] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:02.181  [2024-12-09 17:09:25.151577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:02.445  [2024-12-09 17:09:25.335127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.335189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:20:02.445  [2024-12-09 17:09:25.335209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:20:02.445  [2024-12-09 17:09:25.335219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.338442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.338691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:02.445  [2024-12-09 17:09:25.338719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.201 ms
00:20:02.445  [2024-12-09 17:09:25.338728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.339293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:02.445  [2024-12-09 17:09:25.340180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:02.445  [2024-12-09 17:09:25.340227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.340237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:02.445  [2024-12-09 17:09:25.340251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.964 ms
00:20:02.445  [2024-12-09 17:09:25.340260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.342669] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:02.445  [2024-12-09 17:09:25.357876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.357929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:20:02.445  [2024-12-09 17:09:25.357944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.213 ms
00:20:02.445  [2024-12-09 17:09:25.357955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.358075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.358091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:20:02.445  [2024-12-09 17:09:25.358101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:20:02.445  [2024-12-09 17:09:25.358111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.369467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.369703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:02.445  [2024-12-09 17:09:25.369724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.299 ms
00:20:02.445  [2024-12-09 17:09:25.369736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.369911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.369927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:02.445  [2024-12-09 17:09:25.369940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.113 ms
00:20:02.445  [2024-12-09 17:09:25.369955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.369983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.369994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:20:02.445  [2024-12-09 17:09:25.370003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:20:02.445  [2024-12-09 17:09:25.370014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.370040] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:02.445  [2024-12-09 17:09:25.374535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.374568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:02.445  [2024-12-09 17:09:25.374582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.497 ms
00:20:02.445  [2024-12-09 17:09:25.374591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.374660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.445  [2024-12-09 17:09:25.374670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:20:02.445  [2024-12-09 17:09:25.374683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.015 ms
00:20:02.445  [2024-12-09 17:09:25.374694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.445  [2024-12-09 17:09:25.374718] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:02.445  [2024-12-09 17:09:25.374747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:02.445  [2024-12-09 17:09:25.374799] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:02.445  [2024-12-09 17:09:25.374816] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:02.445  [2024-12-09 17:09:25.374953] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:02.445  [2024-12-09 17:09:25.374966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:02.445  [2024-12-09 17:09:25.374983] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:02.445  [2024-12-09 17:09:25.374995] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:20:02.445  [2024-12-09 17:09:25.375008] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:20:02.445  [2024-12-09 17:09:25.375017] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:20:02.445  [2024-12-09 17:09:25.375028] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:20:02.446  [2024-12-09 17:09:25.375036] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:20:02.446  [2024-12-09 17:09:25.375049] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:20:02.446  [2024-12-09 17:09:25.375060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.375072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:20:02.446  [2024-12-09 17:09:25.375080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.347 ms
00:20:02.446  [2024-12-09 17:09:25.375090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.446  [2024-12-09 17:09:25.375181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.375194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:20:02.446  [2024-12-09 17:09:25.375202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.070 ms
00:20:02.446  [2024-12-09 17:09:25.375211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.446  [2024-12-09 17:09:25.375313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:02.446  [2024-12-09 17:09:25.375327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:02.446  [2024-12-09 17:09:25.375336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:02.446  [2024-12-09 17:09:25.375366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:02.446  [2024-12-09 17:09:25.375398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:02.446  [2024-12-09 17:09:25.375414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:02.446  [2024-12-09 17:09:25.375424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:20:02.446  [2024-12-09 17:09:25.375434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:02.446  [2024-12-09 17:09:25.375445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:02.446  [2024-12-09 17:09:25.375452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:20:02.446  [2024-12-09 17:09:25.375461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:02.446  [2024-12-09 17:09:25.375477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:02.446  [2024-12-09 17:09:25.375524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:02.446  [2024-12-09 17:09:25.375551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:02.446  [2024-12-09 17:09:25.375575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:02.446  [2024-12-09 17:09:25.375602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:02.446  [2024-12-09 17:09:25.375625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:02.446  [2024-12-09 17:09:25.375641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:02.446  [2024-12-09 17:09:25.375650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:20:02.446  [2024-12-09 17:09:25.375656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:02.446  [2024-12-09 17:09:25.375665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:02.446  [2024-12-09 17:09:25.375672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:20:02.446  [2024-12-09 17:09:25.375684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:02.446  [2024-12-09 17:09:25.375701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:20:02.446  [2024-12-09 17:09:25.375709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375717] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:02.446  [2024-12-09 17:09:25.375730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:02.446  [2024-12-09 17:09:25.375741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:02.446  [2024-12-09 17:09:25.375760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:02.446  [2024-12-09 17:09:25.375768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:20:02.446  [2024-12-09 17:09:25.375777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:20:02.446  [2024-12-09 17:09:25.375784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:02.446  [2024-12-09 17:09:25.375793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:20:02.446  [2024-12-09 17:09:25.375800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:20:02.446  [2024-12-09 17:09:25.375811] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:02.446  [2024-12-09 17:09:25.375822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.375836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:02.446  [2024-12-09 17:09:25.375859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:02.446  [2024-12-09 17:09:25.375870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:02.446  [2024-12-09 17:09:25.375877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:02.446  [2024-12-09 17:09:25.375886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:02.446  [2024-12-09 17:09:25.375894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:02.446  [2024-12-09 17:09:25.375903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:02.446  [2024-12-09 17:09:25.375911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:02.446  [2024-12-09 17:09:25.375921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:02.446  [2024-12-09 17:09:25.375929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.375938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.375945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.375954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.375961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:02.446  [2024-12-09 17:09:25.375970] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:02.446  [2024-12-09 17:09:25.375979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.375994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:02.446  [2024-12-09 17:09:25.376002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:02.446  [2024-12-09 17:09:25.376012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:02.446  [2024-12-09 17:09:25.376018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:02.446  [2024-12-09 17:09:25.376028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.376038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:20:02.446  [2024-12-09 17:09:25.376049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.783 ms
00:20:02.446  [2024-12-09 17:09:25.376061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.446  [2024-12-09 17:09:25.414581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.414624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:02.446  [2024-12-09 17:09:25.414638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.455 ms
00:20:02.446  [2024-12-09 17:09:25.414650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.446  [2024-12-09 17:09:25.414797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.414808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:20:02.446  [2024-12-09 17:09:25.414819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:20:02.446  [2024-12-09 17:09:25.414829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.446  [2024-12-09 17:09:25.454132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.454175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:02.446  [2024-12-09 17:09:25.454189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.256 ms
00:20:02.446  [2024-12-09 17:09:25.454199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.446  [2024-12-09 17:09:25.454300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.446  [2024-12-09 17:09:25.454312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:02.447  [2024-12-09 17:09:25.454324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:20:02.447  [2024-12-09 17:09:25.454333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.447  [2024-12-09 17:09:25.455039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.447  [2024-12-09 17:09:25.455072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:02.447  [2024-12-09 17:09:25.455085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.677 ms
00:20:02.447  [2024-12-09 17:09:25.455096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.447  [2024-12-09 17:09:25.455267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.447  [2024-12-09 17:09:25.455279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:02.447  [2024-12-09 17:09:25.455290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.140 ms
00:20:02.447  [2024-12-09 17:09:25.455298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.447  [2024-12-09 17:09:25.476129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.447  [2024-12-09 17:09:25.476165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:02.447  [2024-12-09 17:09:25.476180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.803 ms
00:20:02.447  [2024-12-09 17:09:25.476188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.708  [2024-12-09 17:09:25.508699] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:20:02.708  [2024-12-09 17:09:25.508746] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:02.708  [2024-12-09 17:09:25.508766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.708  [2024-12-09 17:09:25.508777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:20:02.708  [2024-12-09 17:09:25.508790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.453 ms
00:20:02.708  [2024-12-09 17:09:25.508806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.708  [2024-12-09 17:09:25.535175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.708  [2024-12-09 17:09:25.535219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:20:02.708  [2024-12-09 17:09:25.535234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.236 ms
00:20:02.709  [2024-12-09 17:09:25.535244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.548081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.548121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:20:02.709  [2024-12-09 17:09:25.548138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.738 ms
00:20:02.709  [2024-12-09 17:09:25.548147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.560862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.560901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:20:02.709  [2024-12-09 17:09:25.560916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.628 ms
00:20:02.709  [2024-12-09 17:09:25.560924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.561608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.561637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:20:02.709  [2024-12-09 17:09:25.561650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.544 ms
00:20:02.709  [2024-12-09 17:09:25.561658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.632184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.632240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:20:02.709  [2024-12-09 17:09:25.632260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 70.493 ms
00:20:02.709  [2024-12-09 17:09:25.632270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.643646] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:02.709  [2024-12-09 17:09:25.667924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.667980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:20:02.709  [2024-12-09 17:09:25.667998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.554 ms
00:20:02.709  [2024-12-09 17:09:25.668010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.668141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.668157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:20:02.709  [2024-12-09 17:09:25.668169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.015 ms
00:20:02.709  [2024-12-09 17:09:25.668181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.668265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.668278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:20:02.709  [2024-12-09 17:09:25.668287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.061 ms
00:20:02.709  [2024-12-09 17:09:25.668301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.668330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.668345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:20:02.709  [2024-12-09 17:09:25.668354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:20:02.709  [2024-12-09 17:09:25.668365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.668404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:02.709  [2024-12-09 17:09:25.668420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.668432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:20:02.709  [2024-12-09 17:09:25.668443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:20:02.709  [2024-12-09 17:09:25.668451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.695269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.695314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:20:02.709  [2024-12-09 17:09:25.695331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.786 ms
00:20:02.709  [2024-12-09 17:09:25.695341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.695478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.709  [2024-12-09 17:09:25.695491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:20:02.709  [2024-12-09 17:09:25.695504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.043 ms
00:20:02.709  [2024-12-09 17:09:25.695517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.709  [2024-12-09 17:09:25.696894] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:02.709  [2024-12-09 17:09:25.700415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.333 ms, result 0
00:20:02.709  [2024-12-09 17:09:25.703190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:02.709  Some configs were skipped because the RPC state in which they can be invoked had already passed.
00:20:02.709   17:09:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:02.971  [2024-12-09 17:09:25.936053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:02.971  [2024-12-09 17:09:25.936112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:20:02.971  [2024-12-09 17:09:25.936127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.559 ms
00:20:02.971  [2024-12-09 17:09:25.936139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:02.971  [2024-12-09 17:09:25.936175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.685 ms, result 0
00:20:02.971  true
00:20:02.971   17:09:25 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:03.232  [2024-12-09 17:09:26.140098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:03.232  [2024-12-09 17:09:26.140139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:20:03.233  [2024-12-09 17:09:26.140154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.373 ms
00:20:03.233  [2024-12-09 17:09:26.140162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:03.233  [2024-12-09 17:09:26.140203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.480 ms, result 0
00:20:03.233  true
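The two RPCs traced above trim 1024 blocks at each end of the device: the second starts at LBA 23591936, which is the 23592960 L2P entries reported at startup minus 1024. A minimal sketch of the same calls, assuming only that rpc.py can reach the running target and that the bdev is named ftl0:

  # Mirrors the trim.sh trace above; the rpc.py path is taken from the log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024   # first 1024 blocks
  $rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # last 1024: 23592960 - 1024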
00:20:03.233   17:09:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78099
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78099 ']'
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78099
00:20:03.233    17:09:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:03.233    17:09:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78099
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:03.233  killing process with pid 78099
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78099'
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78099
00:20:03.233   17:09:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78099
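The kill sequence above follows the usual autotest_common.sh shape: confirm the pid is non-empty, probe the process with kill -0, check the process name so a sudo wrapper is never signalled directly, then kill and wait to reap the exit status. A minimal re-creation under those assumptions (the function body is inferred from the traced lines, not copied from the script):

  # Sketch reconstructed from the xtrace above; details are assumptions.
  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # no pid recorded
    kill -0 "$pid" 2>/dev/null || return 1    # process already gone
    if [ "$(uname)" = Linux ]; then
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # never signal sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap; propagates the exit status
  }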
00:20:04.194  [2024-12-09 17:09:27.015398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.015483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:20:04.194  [2024-12-09 17:09:27.015501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:20:04.194  [2024-12-09 17:09:27.015512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.015539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:04.194  [2024-12-09 17:09:27.018864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.018908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:20:04.194  [2024-12-09 17:09:27.018926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.286 ms
00:20:04.194  [2024-12-09 17:09:27.018935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.019269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.019293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:20:04.194  [2024-12-09 17:09:27.019307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.281 ms
00:20:04.194  [2024-12-09 17:09:27.019317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.024219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.024260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:20:04.194  [2024-12-09 17:09:27.024276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.874 ms
00:20:04.194  [2024-12-09 17:09:27.024285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.031232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.031273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:20:04.194  [2024-12-09 17:09:27.031291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.895 ms
00:20:04.194  [2024-12-09 17:09:27.031300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.042565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.042614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:20:04.194  [2024-12-09 17:09:27.042631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.174 ms
00:20:04.194  [2024-12-09 17:09:27.042639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.052382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.052424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:20:04.194  [2024-12-09 17:09:27.052437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.670 ms
00:20:04.194  [2024-12-09 17:09:27.052446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.052614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.052628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:20:04.194  [2024-12-09 17:09:27.052640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.113 ms
00:20:04.194  [2024-12-09 17:09:27.052649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.064468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.064532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:20:04.194  [2024-12-09 17:09:27.064548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.792 ms
00:20:04.194  [2024-12-09 17:09:27.064556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.076034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.076071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:20:04.194  [2024-12-09 17:09:27.076090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.425 ms
00:20:04.194  [2024-12-09 17:09:27.076097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.086487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.086523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:20:04.194  [2024-12-09 17:09:27.086536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.337 ms
00:20:04.194  [2024-12-09 17:09:27.086543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.096280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.194  [2024-12-09 17:09:27.096317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:20:04.194  [2024-12-09 17:09:27.096330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.655 ms
00:20:04.194  [2024-12-09 17:09:27.096338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.194  [2024-12-09 17:09:27.096384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:04.194  [2024-12-09 17:09:27.096402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.194  [2024-12-09 17:09:27.096415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.096993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.195  [2024-12-09 17:09:27.097328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:20:04.196  [2024-12-09 17:09:27.097408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:04.196  [2024-12-09 17:09:27.097424] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:20:04.196  [2024-12-09 17:09:27.097438] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:20:04.196  [2024-12-09 17:09:27.097450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:20:04.196  [2024-12-09 17:09:27.097458] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:20:04.196  [2024-12-09 17:09:27.097469] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:20:04.196  [2024-12-09 17:09:27.097476] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:04.196  [2024-12-09 17:09:27.097487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:20:04.196  [2024-12-09 17:09:27.097496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:20:04.196  [2024-12-09 17:09:27.097505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:20:04.196  [2024-12-09 17:09:27.097512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:20:04.196  [2024-12-09 17:09:27.097521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.196  [2024-12-09 17:09:27.097530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:20:04.196  [2024-12-09 17:09:27.097540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.140 ms
00:20:04.196  [2024-12-09 17:09:27.097549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.196  [2024-12-09 17:09:27.112155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.196  [2024-12-09 17:09:27.112191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:20:04.196  [2024-12-09 17:09:27.112208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.564 ms
00:20:04.196  [2024-12-09 17:09:27.112216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.196  [2024-12-09 17:09:27.112711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.196  [2024-12-09 17:09:27.112733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:20:04.196  [2024-12-09 17:09:27.112748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.422 ms
00:20:04.196  [2024-12-09 17:09:27.112756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.196  [2024-12-09 17:09:27.165073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.196  [2024-12-09 17:09:27.165115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:04.196  [2024-12-09 17:09:27.165129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.196  [2024-12-09 17:09:27.165139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.196  [2024-12-09 17:09:27.165248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.196  [2024-12-09 17:09:27.165260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:04.196  [2024-12-09 17:09:27.165276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.196  [2024-12-09 17:09:27.165284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.196  [2024-12-09 17:09:27.165344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.196  [2024-12-09 17:09:27.165354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:04.196  [2024-12-09 17:09:27.165369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.196  [2024-12-09 17:09:27.165378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.196  [2024-12-09 17:09:27.165399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.196  [2024-12-09 17:09:27.165408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:04.196  [2024-12-09 17:09:27.165421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.196  [2024-12-09 17:09:27.165433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.258037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.258088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:04.457  [2024-12-09 17:09:27.258106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.258116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:04.457  [2024-12-09 17:09:27.333184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:04.457  [2024-12-09 17:09:27.333317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:04.457  [2024-12-09 17:09:27.333387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:04.457  [2024-12-09 17:09:27.333551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:20:04.457  [2024-12-09 17:09:27.333619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:04.457  [2024-12-09 17:09:27.333714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.333785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:04.457  [2024-12-09 17:09:27.333797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:04.457  [2024-12-09 17:09:27.333808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:04.457  [2024-12-09 17:09:27.333817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:04.457  [2024-12-09 17:09:27.334096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 318.661 ms, result 0
00:20:05.028   17:09:27 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:05.028   17:09:27 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:05.028  [2024-12-09 17:09:27.983186] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:20:05.028  [2024-12-09 17:09:27.983309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78158 ]
00:20:05.289  [2024-12-09 17:09:28.138789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:05.289  [2024-12-09 17:09:28.227750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:05.549  [2024-12-09 17:09:28.459927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:05.549  [2024-12-09 17:09:28.459990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:05.811  [2024-12-09 17:09:28.613884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.613921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:20:05.811  [2024-12-09 17:09:28.613933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:20:05.811  [2024-12-09 17:09:28.613940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.616129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.616158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:05.811  [2024-12-09 17:09:28.616166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.177 ms
00:20:05.811  [2024-12-09 17:09:28.616172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.616232] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:05.811  [2024-12-09 17:09:28.616809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:05.811  [2024-12-09 17:09:28.616832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.616839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:05.811  [2024-12-09 17:09:28.616858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.607 ms
00:20:05.811  [2024-12-09 17:09:28.616865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.618164] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:05.811  [2024-12-09 17:09:28.628360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.628388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:20:05.811  [2024-12-09 17:09:28.628398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.197 ms
00:20:05.811  [2024-12-09 17:09:28.628404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.628475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.628485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:20:05.811  [2024-12-09 17:09:28.628492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:20:05.811  [2024-12-09 17:09:28.628497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.634687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.634714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:05.811  [2024-12-09 17:09:28.634721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.143 ms
00:20:05.811  [2024-12-09 17:09:28.634727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.634799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.634808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:05.811  [2024-12-09 17:09:28.634815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.046 ms
00:20:05.811  [2024-12-09 17:09:28.634821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.634839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.634855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:20:05.811  [2024-12-09 17:09:28.634861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:20:05.811  [2024-12-09 17:09:28.634867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.634886] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:05.811  [2024-12-09 17:09:28.637794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.637819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:05.811  [2024-12-09 17:09:28.637826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.913 ms
00:20:05.811  [2024-12-09 17:09:28.637832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.637872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.811  [2024-12-09 17:09:28.637881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:20:05.811  [2024-12-09 17:09:28.637887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:20:05.811  [2024-12-09 17:09:28.637894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.811  [2024-12-09 17:09:28.637910] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:05.811  [2024-12-09 17:09:28.637928] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:05.811  [2024-12-09 17:09:28.637955] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:05.811  [2024-12-09 17:09:28.637967] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:05.811  [2024-12-09 17:09:28.638050] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:05.811  [2024-12-09 17:09:28.638058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:05.811  [2024-12-09 17:09:28.638066] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:05.811  [2024-12-09 17:09:28.638076] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:20:05.811  [2024-12-09 17:09:28.638083] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:20:05.811  [2024-12-09 17:09:28.638089] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:20:05.811  [2024-12-09 17:09:28.638095] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:20:05.811  [2024-12-09 17:09:28.638101] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:20:05.812  [2024-12-09 17:09:28.638107] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:20:05.812  [2024-12-09 17:09:28.638113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.638119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:20:05.812  [2024-12-09 17:09:28.638125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.205 ms
00:20:05.812  [2024-12-09 17:09:28.638131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.812  [2024-12-09 17:09:28.638197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.638206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:20:05.812  [2024-12-09 17:09:28.638212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:20:05.812  [2024-12-09 17:09:28.638218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.812  [2024-12-09 17:09:28.638294] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:05.812  [2024-12-09 17:09:28.638302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:05.812  [2024-12-09 17:09:28.638309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:05.812  [2024-12-09 17:09:28.638326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:05.812  [2024-12-09 17:09:28.638343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:05.812  [2024-12-09 17:09:28.638357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:05.812  [2024-12-09 17:09:28.638367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:20:05.812  [2024-12-09 17:09:28.638373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:05.812  [2024-12-09 17:09:28.638378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:05.812  [2024-12-09 17:09:28.638383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:20:05.812  [2024-12-09 17:09:28.638388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:05.812  [2024-12-09 17:09:28.638398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:05.812  [2024-12-09 17:09:28.638414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:05.812  [2024-12-09 17:09:28.638429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:05.812  [2024-12-09 17:09:28.638444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:05.812  [2024-12-09 17:09:28.638459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:05.812  [2024-12-09 17:09:28.638473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:05.812  [2024-12-09 17:09:28.638484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:05.812  [2024-12-09 17:09:28.638489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:20:05.812  [2024-12-09 17:09:28.638493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:05.812  [2024-12-09 17:09:28.638498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:05.812  [2024-12-09 17:09:28.638503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:20:05.812  [2024-12-09 17:09:28.638508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:05.812  [2024-12-09 17:09:28.638522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:20:05.812  [2024-12-09 17:09:28.638527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638532] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:05.812  [2024-12-09 17:09:28.638538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:05.812  [2024-12-09 17:09:28.638546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:05.812  [2024-12-09 17:09:28.638560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:05.812  [2024-12-09 17:09:28.638565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:20:05.812  [2024-12-09 17:09:28.638570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:20:05.812  [2024-12-09 17:09:28.638575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:05.812  [2024-12-09 17:09:28.638580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:20:05.812  [2024-12-09 17:09:28.638585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:20:05.812  [2024-12-09 17:09:28.638592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:05.812  [2024-12-09 17:09:28.638599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:05.812  [2024-12-09 17:09:28.638611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:05.812  [2024-12-09 17:09:28.638617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:05.812  [2024-12-09 17:09:28.638623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:05.812  [2024-12-09 17:09:28.638628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:05.812  [2024-12-09 17:09:28.638634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:05.812  [2024-12-09 17:09:28.638642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:05.812  [2024-12-09 17:09:28.638647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:05.812  [2024-12-09 17:09:28.638652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:05.812  [2024-12-09 17:09:28.638658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:05.812  [2024-12-09 17:09:28.638686] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:05.812  [2024-12-09 17:09:28.638692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:05.812  [2024-12-09 17:09:28.638704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:05.812  [2024-12-09 17:09:28.638710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:05.812  [2024-12-09 17:09:28.638716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
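A quick cross-check of the two layout dumps above, assuming the superblock records sizes in 4 KiB blocks: the 0x5a00-block region (0x5a00 = 23040; 23040 × 4096 B = 90.00 MiB) lines up with the 90.00 MiB l2p region in the NV cache layout, which in turn matches the 23592960 L2P entries × 4-byte address size = 94371840 B reported by ftl_layout_setup. Likewise the 0x20-block regions are 32 × 4096 B = 0.12 MiB, as printed for sb and nvc_md.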
00:20:05.812  [2024-12-09 17:09:28.638721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.638729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:20:05.812  [2024-12-09 17:09:28.638735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.480 ms
00:20:05.812  [2024-12-09 17:09:28.638741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.812  [2024-12-09 17:09:28.662945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.662973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:05.812  [2024-12-09 17:09:28.662982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.142 ms
00:20:05.812  [2024-12-09 17:09:28.662989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.812  [2024-12-09 17:09:28.663084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.663092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:20:05.812  [2024-12-09 17:09:28.663098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:20:05.812  [2024-12-09 17:09:28.663104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.812  [2024-12-09 17:09:28.702950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.702983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:05.812  [2024-12-09 17:09:28.702995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.829 ms
00:20:05.812  [2024-12-09 17:09:28.703001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.812  [2024-12-09 17:09:28.703062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.812  [2024-12-09 17:09:28.703070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:05.812  [2024-12-09 17:09:28.703078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:20:05.813  [2024-12-09 17:09:28.703084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.703471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.703484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:05.813  [2024-12-09 17:09:28.703491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.372 ms
00:20:05.813  [2024-12-09 17:09:28.703502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.703615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.703623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:05.813  [2024-12-09 17:09:28.703630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.094 ms
00:20:05.813  [2024-12-09 17:09:28.703636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.715915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.715940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:05.813  [2024-12-09 17:09:28.715949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.261 ms
00:20:05.813  [2024-12-09 17:09:28.715954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.726261] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:20:05.813  [2024-12-09 17:09:28.726302] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:05.813  [2024-12-09 17:09:28.726312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.726319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:20:05.813  [2024-12-09 17:09:28.726327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.265 ms
00:20:05.813  [2024-12-09 17:09:28.726333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.744801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.744829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:20:05.813  [2024-12-09 17:09:28.744838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.413 ms
00:20:05.813  [2024-12-09 17:09:28.744852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.753754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.753780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:20:05.813  [2024-12-09 17:09:28.753787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.861 ms
00:20:05.813  [2024-12-09 17:09:28.753793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.762218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.762243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:20:05.813  [2024-12-09 17:09:28.762251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.385 ms
00:20:05.813  [2024-12-09 17:09:28.762256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.762718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.762739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:20:05.813  [2024-12-09 17:09:28.762747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.398 ms
00:20:05.813  [2024-12-09 17:09:28.762753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.810199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.810238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:20:05.813  [2024-12-09 17:09:28.810248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.427 ms
00:20:05.813  [2024-12-09 17:09:28.810255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.818446] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:05.813  [2024-12-09 17:09:28.832662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.832696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:20:05.813  [2024-12-09 17:09:28.832706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.344 ms
00:20:05.813  [2024-12-09 17:09:28.832717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.832802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.832811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:20:05.813  [2024-12-09 17:09:28.832818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:20:05.813  [2024-12-09 17:09:28.832824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.832884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.832893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:20:05.813  [2024-12-09 17:09:28.832900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:20:05.813  [2024-12-09 17:09:28.832910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.832938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.832946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:20:05.813  [2024-12-09 17:09:28.832952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:20:05.813  [2024-12-09 17:09:28.832958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:05.813  [2024-12-09 17:09:28.832987] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:05.813  [2024-12-09 17:09:28.832995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.813  [2024-12-09 17:09:28.833002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:20:05.813  [2024-12-09 17:09:28.833008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:20:05.813  [2024-12-09 17:09:28.833014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:06.074  [2024-12-09 17:09:28.851942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:06.074  [2024-12-09 17:09:28.851970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:20:06.074  [2024-12-09 17:09:28.851979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.909 ms
00:20:06.074  [2024-12-09 17:09:28.851986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:06.074  [2024-12-09 17:09:28.852058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:06.074  [2024-12-09 17:09:28.852066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:20:06.074  [2024-12-09 17:09:28.852073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.029 ms
00:20:06.074  [2024-12-09 17:09:28.852080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:06.074  [2024-12-09 17:09:28.853112] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:06.074  [2024-12-09 17:09:28.855538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 238.960 ms, result 0
00:20:06.074  [2024-12-09 17:09:28.856491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:06.074  [2024-12-09 17:09:28.867306] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:07.016  
[2024-12-09T17:09:30.999Z] Copying: 21/256 [MB] (21 MBps)
[2024-12-09T17:09:31.942Z] Copying: 36/256 [MB] (14 MBps)
[2024-12-09T17:09:32.886Z] Copying: 55/256 [MB] (18 MBps)
[2024-12-09T17:09:34.266Z] Copying: 75/256 [MB] (19 MBps)
[2024-12-09T17:09:35.204Z] Copying: 89/256 [MB] (14 MBps)
[2024-12-09T17:09:36.143Z] Copying: 105/256 [MB] (15 MBps)
[2024-12-09T17:09:37.084Z] Copying: 120/256 [MB] (15 MBps)
[2024-12-09T17:09:38.021Z] Copying: 135/256 [MB] (15 MBps)
[2024-12-09T17:09:38.960Z] Copying: 152/256 [MB] (17 MBps)
[2024-12-09T17:09:39.902Z] Copying: 167/256 [MB] (14 MBps)
[2024-12-09T17:09:41.289Z] Copying: 187/256 [MB] (19 MBps)
[2024-12-09T17:09:42.249Z] Copying: 204/256 [MB] (17 MBps)
[2024-12-09T17:09:43.201Z] Copying: 218/256 [MB] (13 MBps)
[2024-12-09T17:09:44.146Z] Copying: 235/256 [MB] (16 MBps)
[2024-12-09T17:09:44.408Z] Copying: 252/256 [MB] (16 MBps)
[2024-12-09T17:09:44.409Z] Copying: 256/256 [MB] (average 16 MBps)
00:20:21.368  [2024-12-09 17:09:44.176124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:21.368  [2024-12-09 17:09:44.187022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.187077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:20:21.368  [2024-12-09 17:09:44.187105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:20:21.368  [2024-12-09 17:09:44.187115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.187140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:21.368  [2024-12-09 17:09:44.190511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.190558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:20:21.368  [2024-12-09 17:09:44.190570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.352 ms
00:20:21.368  [2024-12-09 17:09:44.190580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.190869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.190881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:20:21.368  [2024-12-09 17:09:44.190891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.263 ms
00:20:21.368  [2024-12-09 17:09:44.190899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.194612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.194640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:20:21.368  [2024-12-09 17:09:44.194650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.693 ms
00:20:21.368  [2024-12-09 17:09:44.194658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.201582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.201623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:20:21.368  [2024-12-09 17:09:44.201634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.906 ms
00:20:21.368  [2024-12-09 17:09:44.201643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.227792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.227861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:20:21.368  [2024-12-09 17:09:44.227876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.081 ms
00:20:21.368  [2024-12-09 17:09:44.227885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.245381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.245433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:20:21.368  [2024-12-09 17:09:44.245453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.431 ms
00:20:21.368  [2024-12-09 17:09:44.245463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.245603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.245614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:20:21.368  [2024-12-09 17:09:44.245636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.096 ms
00:20:21.368  [2024-12-09 17:09:44.245644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.272028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.272079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:20:21.368  [2024-12-09 17:09:44.272091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.366 ms
00:20:21.368  [2024-12-09 17:09:44.272097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.297090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.297140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:20:21.368  [2024-12-09 17:09:44.297151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.932 ms
00:20:21.368  [2024-12-09 17:09:44.297158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.321705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.321751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:20:21.368  [2024-12-09 17:09:44.321761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.498 ms
00:20:21.368  [2024-12-09 17:09:44.321769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.346443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.368  [2024-12-09 17:09:44.346492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:20:21.368  [2024-12-09 17:09:44.346504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.581 ms
00:20:21.368  [2024-12-09 17:09:44.346511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.368  [2024-12-09 17:09:44.346561] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:21.368  [2024-12-09 17:09:44.346580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.368  [2024-12-09 17:09:44.346956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.346963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.346971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.346979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.346986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.346994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:20:21.369  [2024-12-09 17:09:44.347392] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:21.369  [2024-12-09 17:09:44.347401] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:20:21.369  [2024-12-09 17:09:44.347411] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:20:21.369  [2024-12-09 17:09:44.347419] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:20:21.369  [2024-12-09 17:09:44.347428] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:20:21.369  [2024-12-09 17:09:44.347437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:20:21.369  [2024-12-09 17:09:44.347446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:21.369  [2024-12-09 17:09:44.347454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:20:21.369  [2024-12-09 17:09:44.347465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:20:21.369  [2024-12-09 17:09:44.347471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:20:21.369  [2024-12-09 17:09:44.347478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:20:21.369  [2024-12-09 17:09:44.347486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.369  [2024-12-09 17:09:44.347495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:20:21.369  [2024-12-09 17:09:44.347504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.927 ms
00:20:21.369  [2024-12-09 17:09:44.347512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.369  [2024-12-09 17:09:44.362126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.369  [2024-12-09 17:09:44.362169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:20:21.369  [2024-12-09 17:09:44.362180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.580 ms
00:20:21.369  [2024-12-09 17:09:44.362188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.369  [2024-12-09 17:09:44.362628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.369  [2024-12-09 17:09:44.362677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:20:21.369  [2024-12-09 17:09:44.362687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.397 ms
00:20:21.369  [2024-12-09 17:09:44.362694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.369  [2024-12-09 17:09:44.404253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.369  [2024-12-09 17:09:44.404307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:21.369  [2024-12-09 17:09:44.404320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.369  [2024-12-09 17:09:44.404335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.369  [2024-12-09 17:09:44.404421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.369  [2024-12-09 17:09:44.404431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:21.369  [2024-12-09 17:09:44.404439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.369  [2024-12-09 17:09:44.404448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.369  [2024-12-09 17:09:44.404517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.369  [2024-12-09 17:09:44.404528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:21.369  [2024-12-09 17:09:44.404537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.369  [2024-12-09 17:09:44.404545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.369  [2024-12-09 17:09:44.404567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.369  [2024-12-09 17:09:44.404577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:21.369  [2024-12-09 17:09:44.404585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.369  [2024-12-09 17:09:44.404593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.495396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.495469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:21.631  [2024-12-09 17:09:44.495483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.495493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:21.631  [2024-12-09 17:09:44.569108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:21.631  [2024-12-09 17:09:44.569214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:21.631  [2024-12-09 17:09:44.569289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:21.631  [2024-12-09 17:09:44.569430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:20:21.631  [2024-12-09 17:09:44.569500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:21.631  [2024-12-09 17:09:44.569585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:21.631  [2024-12-09 17:09:44.569668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:21.631  [2024-12-09 17:09:44.569677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:21.631  [2024-12-09 17:09:44.569687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:21.631  [2024-12-09 17:09:44.569916] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.838 ms, result 0
00:20:22.577  
00:20:22.577  
00:20:22.577   17:09:45 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:20:22.577   17:09:45 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:23.151   17:09:46 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:23.412  [2024-12-09 17:09:46.242427] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:20:23.412  [2024-12-09 17:09:46.242600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78355 ]
00:20:23.412  [2024-12-09 17:09:46.401769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.674  [2024-12-09 17:09:46.543406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:23.935  [2024-12-09 17:09:46.881078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:23.935  [2024-12-09 17:09:46.881179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:24.198  [2024-12-09 17:09:47.047273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.047344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:20:24.198  [2024-12-09 17:09:47.047363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:20:24.198  [2024-12-09 17:09:47.047372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.050642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.050699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:24.198  [2024-12-09 17:09:47.050712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.246 ms
00:20:24.198  [2024-12-09 17:09:47.050721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.050876] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:24.198  [2024-12-09 17:09:47.051717] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:24.198  [2024-12-09 17:09:47.051755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.051766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:24.198  [2024-12-09 17:09:47.051777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.912 ms
00:20:24.198  [2024-12-09 17:09:47.051786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.054139] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:24.198  [2024-12-09 17:09:47.069732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.069782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:20:24.198  [2024-12-09 17:09:47.069797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.594 ms
00:20:24.198  [2024-12-09 17:09:47.069806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.069942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.069957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:20:24.198  [2024-12-09 17:09:47.069967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:20:24.198  [2024-12-09 17:09:47.069976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.081134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.081174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:24.198  [2024-12-09 17:09:47.081187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.108 ms
00:20:24.198  [2024-12-09 17:09:47.081195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.081325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.081338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:24.198  [2024-12-09 17:09:47.081348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:20:24.198  [2024-12-09 17:09:47.081357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.081388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.081397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:20:24.198  [2024-12-09 17:09:47.081406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:20:24.198  [2024-12-09 17:09:47.081414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.081436] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:24.198  [2024-12-09 17:09:47.085988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.086023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:24.198  [2024-12-09 17:09:47.086034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.558 ms
00:20:24.198  [2024-12-09 17:09:47.086044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.086123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.086134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:20:24.198  [2024-12-09 17:09:47.086144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:20:24.198  [2024-12-09 17:09:47.086153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.086181] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:24.198  [2024-12-09 17:09:47.086210] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:24.198  [2024-12-09 17:09:47.086251] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:24.198  [2024-12-09 17:09:47.086269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:24.198  [2024-12-09 17:09:47.086384] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:24.198  [2024-12-09 17:09:47.086394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:24.198  [2024-12-09 17:09:47.086405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:24.198  [2024-12-09 17:09:47.086421] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:20:24.198  [2024-12-09 17:09:47.086433] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:20:24.198  [2024-12-09 17:09:47.086443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:20:24.198  [2024-12-09 17:09:47.086451] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:20:24.198  [2024-12-09 17:09:47.086459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:20:24.198  [2024-12-09 17:09:47.086469] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:20:24.198  [2024-12-09 17:09:47.086480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.086489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:20:24.198  [2024-12-09 17:09:47.086499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.302 ms
00:20:24.198  [2024-12-09 17:09:47.086511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.198  [2024-12-09 17:09:47.086601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.198  [2024-12-09 17:09:47.086624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:20:24.198  [2024-12-09 17:09:47.086634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.070 ms
00:20:24.198  [2024-12-09 17:09:47.086642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.086750] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:24.199  [2024-12-09 17:09:47.086769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:24.199  [2024-12-09 17:09:47.086779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:24.199  [2024-12-09 17:09:47.086789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:24.199  [2024-12-09 17:09:47.086806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:20:24.199  [2024-12-09 17:09:47.086822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:24.199  [2024-12-09 17:09:47.086831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:24.199  [2024-12-09 17:09:47.086862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:24.199  [2024-12-09 17:09:47.086880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:20:24.199  [2024-12-09 17:09:47.086889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:24.199  [2024-12-09 17:09:47.086897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:24.199  [2024-12-09 17:09:47.086905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:20:24.199  [2024-12-09 17:09:47.086913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:24.199  [2024-12-09 17:09:47.086927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:20:24.199  [2024-12-09 17:09:47.086933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:24.199  [2024-12-09 17:09:47.086949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:24.199  [2024-12-09 17:09:47.086962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:24.199  [2024-12-09 17:09:47.086972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:20:24.199  [2024-12-09 17:09:47.086979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:24.199  [2024-12-09 17:09:47.086987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:24.199  [2024-12-09 17:09:47.086995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:20:24.199  [2024-12-09 17:09:47.087003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:24.199  [2024-12-09 17:09:47.087012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:24.199  [2024-12-09 17:09:47.087019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:20:24.199  [2024-12-09 17:09:47.087027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:24.199  [2024-12-09 17:09:47.087036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:24.199  [2024-12-09 17:09:47.087044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:20:24.199  [2024-12-09 17:09:47.087052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:24.199  [2024-12-09 17:09:47.087059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:24.199  [2024-12-09 17:09:47.087067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:20:24.199  [2024-12-09 17:09:47.087075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:24.199  [2024-12-09 17:09:47.087082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:24.199  [2024-12-09 17:09:47.087089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:20:24.199  [2024-12-09 17:09:47.087097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.087105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:24.199  [2024-12-09 17:09:47.087112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:20:24.199  [2024-12-09 17:09:47.087120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.087126] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:24.199  [2024-12-09 17:09:47.087134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:24.199  [2024-12-09 17:09:47.087146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:24.199  [2024-12-09 17:09:47.087154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:24.199  [2024-12-09 17:09:47.087162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:24.199  [2024-12-09 17:09:47.087169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:20:24.199  [2024-12-09 17:09:47.087177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:20:24.199  [2024-12-09 17:09:47.087188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:24.199  [2024-12-09 17:09:47.087195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:20:24.199  [2024-12-09 17:09:47.087202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:20:24.199  [2024-12-09 17:09:47.087212] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:24.199  [2024-12-09 17:09:47.087223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:24.199  [2024-12-09 17:09:47.087243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:24.199  [2024-12-09 17:09:47.087251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:24.199  [2024-12-09 17:09:47.087259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:24.199  [2024-12-09 17:09:47.087268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:24.199  [2024-12-09 17:09:47.087280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:24.199  [2024-12-09 17:09:47.087289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:24.199  [2024-12-09 17:09:47.087297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:24.199  [2024-12-09 17:09:47.087307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:24.199  [2024-12-09 17:09:47.087315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:24.199  [2024-12-09 17:09:47.087357] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:24.199  [2024-12-09 17:09:47.087367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:24.199  [2024-12-09 17:09:47.087383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:24.199  [2024-12-09 17:09:47.087392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:24.199  [2024-12-09 17:09:47.087403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
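A note on units in the dumps above: the superblock (SB) metadata entries report regions as hex block offsets/sizes, while the region dump reports MiB. The figures are consistent with a 4096-byte FTL block (e.g. the l2p region, blk_sz 0x5a00, is listed as 90.00 MiB). A minimal Python sketch of the conversion, under that block-size assumption:

# Hypothetical helper: convert the blk_offs/blk_sz hex fields from the
# SB metadata layout dump into MiB. The 4096-byte block size is an
# assumption, but it reproduces the MiB figures printed in this log.
FTL_BLOCK_SIZE = 4096  # bytes

def blocks_to_mib(blk_field: str) -> float:
    """Convert a hex block count such as '0x5a00' to MiB."""
    return int(blk_field, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

print(blocks_to_mib("0x5a00"))  # 90.0   -> 'Region l2p ... 90.00 MiB'
print(blocks_to_mib("0x20"))    # 0.125  -> 'Region sb  ... 0.12 MiB'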
00:20:24.199  [2024-12-09 17:09:47.087411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.087424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:20:24.199  [2024-12-09 17:09:47.087432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.731 ms
00:20:24.199  [2024-12-09 17:09:47.087440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.126473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.126527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:24.199  [2024-12-09 17:09:47.126540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.971 ms
00:20:24.199  [2024-12-09 17:09:47.126550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.126722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.126735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:20:24.199  [2024-12-09 17:09:47.126746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.073 ms
00:20:24.199  [2024-12-09 17:09:47.126755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.180666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.180732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:24.199  [2024-12-09 17:09:47.180752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 53.881 ms
00:20:24.199  [2024-12-09 17:09:47.180762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.180933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.180949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:24.199  [2024-12-09 17:09:47.180960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:20:24.199  [2024-12-09 17:09:47.180970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.181669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.181715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:24.199  [2024-12-09 17:09:47.181738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.670 ms
00:20:24.199  [2024-12-09 17:09:47.181749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.199  [2024-12-09 17:09:47.181953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.199  [2024-12-09 17:09:47.181965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:24.199  [2024-12-09 17:09:47.181975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.166 ms
00:20:24.200  [2024-12-09 17:09:47.181983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.200  [2024-12-09 17:09:47.201030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.200  [2024-12-09 17:09:47.201083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:24.200  [2024-12-09 17:09:47.201096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.021 ms
00:20:24.200  [2024-12-09 17:09:47.201106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.200  [2024-12-09 17:09:47.216715] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:20:24.200  [2024-12-09 17:09:47.216770] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:24.200  [2024-12-09 17:09:47.216785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.200  [2024-12-09 17:09:47.216795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:20:24.200  [2024-12-09 17:09:47.216806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.537 ms
00:20:24.200  [2024-12-09 17:09:47.216815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.243345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.243405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:20:24.462  [2024-12-09 17:09:47.243419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.403 ms
00:20:24.462  [2024-12-09 17:09:47.243429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.256805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.256866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:20:24.462  [2024-12-09 17:09:47.256880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.294 ms
00:20:24.462  [2024-12-09 17:09:47.256889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.269808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.269871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:20:24.462  [2024-12-09 17:09:47.269884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.823 ms
00:20:24.462  [2024-12-09 17:09:47.269892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.270605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.270640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:20:24.462  [2024-12-09 17:09:47.270652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.584 ms
00:20:24.462  [2024-12-09 17:09:47.270661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.344013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.344122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:20:24.462  [2024-12-09 17:09:47.344140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 73.319 ms
00:20:24.462  [2024-12-09 17:09:47.344150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.356187] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:24.462  [2024-12-09 17:09:47.381394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.381456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:20:24.462  [2024-12-09 17:09:47.381471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.130 ms
00:20:24.462  [2024-12-09 17:09:47.381487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.381602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.381617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:20:24.462  [2024-12-09 17:09:47.381630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:20:24.462  [2024-12-09 17:09:47.381641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.381716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.381727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:20:24.462  [2024-12-09 17:09:47.381737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:20:24.462  [2024-12-09 17:09:47.381751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.381792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.381803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:20:24.462  [2024-12-09 17:09:47.381812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:20:24.462  [2024-12-09 17:09:47.381821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.381902] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:24.462  [2024-12-09 17:09:47.381916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.381926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:20:24.462  [2024-12-09 17:09:47.381935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:20:24.462  [2024-12-09 17:09:47.381945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.409306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.409361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:20:24.462  [2024-12-09 17:09:47.409376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.334 ms
00:20:24.462  [2024-12-09 17:09:47.409385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.409513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.462  [2024-12-09 17:09:47.409527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:20:24.462  [2024-12-09 17:09:47.409538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:20:24.462  [2024-12-09 17:09:47.409547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.462  [2024-12-09 17:09:47.410940] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:24.462  [2024-12-09 17:09:47.414593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 363.233 ms, result 0
00:20:24.462  [2024-12-09 17:09:47.416054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:24.462  [2024-12-09 17:09:47.429885] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:24.723  [2024-12-09T17:09:47.764Z] Copying: 4096/4096 [kB] (average 12 MBps)
00:20:24.723  [2024-12-09 17:09:47.748395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:24.723  [2024-12-09 17:09:47.757425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.723  [2024-12-09 17:09:47.757475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:20:24.723  [2024-12-09 17:09:47.757496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:20:24.723  [2024-12-09 17:09:47.757505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.723  [2024-12-09 17:09:47.757529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:24.723  [2024-12-09 17:09:47.760852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.723  [2024-12-09 17:09:47.760895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:20:24.723  [2024-12-09 17:09:47.760907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.301 ms
00:20:24.723  [2024-12-09 17:09:47.760916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.764487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.764546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:20:24.985  [2024-12-09 17:09:47.764558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.543 ms
00:20:24.985  [2024-12-09 17:09:47.764566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.768831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.768881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:20:24.985  [2024-12-09 17:09:47.768893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.243 ms
00:20:24.985  [2024-12-09 17:09:47.768902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.775870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.775914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:20:24.985  [2024-12-09 17:09:47.775925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.933 ms
00:20:24.985  [2024-12-09 17:09:47.775935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.801906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.801954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:20:24.985  [2024-12-09 17:09:47.801967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.915 ms
00:20:24.985  [2024-12-09 17:09:47.801975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.819760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.819817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:20:24.985  [2024-12-09 17:09:47.819831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.694 ms
00:20:24.985  [2024-12-09 17:09:47.819840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.819990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.820005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:20:24.985  [2024-12-09 17:09:47.820027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.096 ms
00:20:24.985  [2024-12-09 17:09:47.820035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.985  [2024-12-09 17:09:47.846550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.985  [2024-12-09 17:09:47.846600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:20:24.985  [2024-12-09 17:09:47.846613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.496 ms
00:20:24.986  [2024-12-09 17:09:47.846620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.986  [2024-12-09 17:09:47.872424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.986  [2024-12-09 17:09:47.872471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:20:24.986  [2024-12-09 17:09:47.872483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.740 ms
00:20:24.986  [2024-12-09 17:09:47.872499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.986  [2024-12-09 17:09:47.897618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.986  [2024-12-09 17:09:47.897663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:20:24.986  [2024-12-09 17:09:47.897675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.055 ms
00:20:24.986  [2024-12-09 17:09:47.897683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.986  [2024-12-09 17:09:47.922772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.986  [2024-12-09 17:09:47.922819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:20:24.986  [2024-12-09 17:09:47.922831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.992 ms
00:20:24.986  [2024-12-09 17:09:47.922839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.986  [2024-12-09 17:09:47.922903] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:24.986  [2024-12-09 17:09:47.922922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.922992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.986  [2024-12-09 17:09:47.923557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:20:24.987  [2024-12-09 17:09:47.923760] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:24.987  [2024-12-09 17:09:47.923768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:20:24.987  [2024-12-09 17:09:47.923778] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:20:24.987  [2024-12-09 17:09:47.923785] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:20:24.987  [2024-12-09 17:09:47.923792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:20:24.987  [2024-12-09 17:09:47.923806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:20:24.987  [2024-12-09 17:09:47.923814] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:24.987  [2024-12-09 17:09:47.923822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:20:24.987  [2024-12-09 17:09:47.923833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:20:24.987  [2024-12-09 17:09:47.923839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:20:24.987  [2024-12-09 17:09:47.923859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
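The "WAF: inf" in the stats above follows directly from the two counters printed with it: write amplification is the ratio of total writes to user writes, and no user writes have been issued yet, so

\text{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} = \infty

i.e. the 960 blocks written so far are all metadata/housekeeping writes.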
00:20:24.987  [2024-12-09 17:09:47.923867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.987  [2024-12-09 17:09:47.923875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:20:24.987  [2024-12-09 17:09:47.923885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.965 ms
00:20:24.987  [2024-12-09 17:09:47.923894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.987  [2024-12-09 17:09:47.937704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.987  [2024-12-09 17:09:47.937747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:20:24.987  [2024-12-09 17:09:47.937758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.777 ms
00:20:24.987  [2024-12-09 17:09:47.937766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.987  [2024-12-09 17:09:47.938216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:24.987  [2024-12-09 17:09:47.938240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:20:24.987  [2024-12-09 17:09:47.938251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.407 ms
00:20:24.987  [2024-12-09 17:09:47.938259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.987  [2024-12-09 17:09:47.980629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:24.987  [2024-12-09 17:09:47.980680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:24.987  [2024-12-09 17:09:47.980693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:24.987  [2024-12-09 17:09:47.980710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.987  [2024-12-09 17:09:47.980791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:24.987  [2024-12-09 17:09:47.980801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:24.987  [2024-12-09 17:09:47.980810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:24.987  [2024-12-09 17:09:47.980819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.987  [2024-12-09 17:09:47.980895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:24.987  [2024-12-09 17:09:47.980908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:24.987  [2024-12-09 17:09:47.980918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:24.987  [2024-12-09 17:09:47.980926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:24.987  [2024-12-09 17:09:47.980950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:24.987  [2024-12-09 17:09:47.980959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:24.987  [2024-12-09 17:09:47.980967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:24.987  [2024-12-09 17:09:47.980975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.074388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.074463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:25.249  [2024-12-09 17:09:48.074477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.074494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:25.249  [2024-12-09 17:09:48.150100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:25.249  [2024-12-09 17:09:48.150204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:25.249  [2024-12-09 17:09:48.150279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:25.249  [2024-12-09 17:09:48.150423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:20:25.249  [2024-12-09 17:09:48.150493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:25.249  [2024-12-09 17:09:48.150576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:25.249  [2024-12-09 17:09:48.150664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:25.249  [2024-12-09 17:09:48.150674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:25.249  [2024-12-09 17:09:48.150682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:25.249  [2024-12-09 17:09:48.150906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.420 ms, result 0
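Both management processes bracketed above ('FTL startup', 363.233 ms, and 'FTL shutdown', 393.420 ms) are traced step by step between the finish_msg markers. A minimal sketch (a hypothetical helper, written against the trace_step line format in this log) that pulls out and ranks the per-step durations:

import re
import sys
from collections import defaultdict

# trace_step prints a 'name:' line followed by a 'duration:' line;
# pair them up and accumulate per-step totals.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\]\s+name:\s+(.+)$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\]\s+duration:\s+([\d.]+) ms")

durations = defaultdict(float)
current = None
for line in sys.stdin:
    m = NAME_RE.search(line)
    if m:
        current = m.group(1).strip()
        continue
    m = DUR_RE.search(line)
    if m and current is not None:
        durations[current] += float(m.group(1))
        current = None

for name, ms in sorted(durations.items(), key=lambda kv: -kv[1]):
    print(f"{ms:8.3f} ms  {name}")

Run over this excerpt it would rank 'Restore P2L checkpoints' (73.319 ms) as the single most expensive startup step.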
00:20:26.194  
00:20:26.194  
00:20:26.194   17:09:49 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78386
00:20:26.194   17:09:49 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:26.194   17:09:49 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78386
00:20:26.194   17:09:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78386 ']'
00:20:26.194   17:09:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:26.194  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:26.194   17:09:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:26.194   17:09:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:26.194   17:09:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:26.194   17:09:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:26.194  [2024-12-09 17:09:49.126327] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:20:26.194  [2024-12-09 17:09:49.126495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78386 ]
00:20:26.456  [2024-12-09 17:09:49.288190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:26.456  [2024-12-09 17:09:49.431956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:27.399   17:09:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:27.399   17:09:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:20:27.399   17:09:50 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
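trim.sh@96 replays a saved configuration into the freshly started target over the /var/tmp/spdk.sock RPC socket; load_config reads the JSON configuration from stdin (the rpc.py default, to the best of my understanding). A sketch of driving the same call from Python, with a hypothetical config file name:

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as used in this log

# load_config replays a JSON config (e.g. one produced by save_config)
# against the running spdk_tgt listening on /var/tmp/spdk.sock.
with open("ftl_config.json", "rb") as cfg:  # hypothetical file name
    subprocess.run([RPC, "load_config"], stdin=cfg, check=True)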
00:20:27.660  [2024-12-09 17:09:50.469814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:27.660  [2024-12-09 17:09:50.469944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:27.660  [2024-12-09 17:09:50.630034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.660  [2024-12-09 17:09:50.630098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:20:27.660  [2024-12-09 17:09:50.630120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:20:27.660  [2024-12-09 17:09:50.630130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.633353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.633405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:27.661  [2024-12-09 17:09:50.633419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.199 ms
00:20:27.661  [2024-12-09 17:09:50.633428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.633578] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:27.661  [2024-12-09 17:09:50.634348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:27.661  [2024-12-09 17:09:50.634386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.634396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:27.661  [2024-12-09 17:09:50.634409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.823 ms
00:20:27.661  [2024-12-09 17:09:50.634417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.636804] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:27.661  [2024-12-09 17:09:50.652333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.652390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:20:27.661  [2024-12-09 17:09:50.652406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.536 ms
00:20:27.661  [2024-12-09 17:09:50.652418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.652557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.652574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:20:27.661  [2024-12-09 17:09:50.652584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.033 ms
00:20:27.661  [2024-12-09 17:09:50.652595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.664119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.664170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:27.661  [2024-12-09 17:09:50.664183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.462 ms
00:20:27.661  [2024-12-09 17:09:50.664194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.664336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.664352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:27.661  [2024-12-09 17:09:50.664362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.082 ms
00:20:27.661  [2024-12-09 17:09:50.664376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.664404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.664416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:20:27.661  [2024-12-09 17:09:50.664425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:20:27.661  [2024-12-09 17:09:50.664436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.664462] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:27.661  [2024-12-09 17:09:50.669123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.669165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:27.661  [2024-12-09 17:09:50.669180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.664 ms
00:20:27.661  [2024-12-09 17:09:50.669189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.669261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.669271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:20:27.661  [2024-12-09 17:09:50.669284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:20:27.661  [2024-12-09 17:09:50.669295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.669322] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:27.661  [2024-12-09 17:09:50.669350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:27.661  [2024-12-09 17:09:50.669404] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:27.661  [2024-12-09 17:09:50.669423] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:27.661  [2024-12-09 17:09:50.669538] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:27.661  [2024-12-09 17:09:50.669551] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:27.661  [2024-12-09 17:09:50.669568] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:27.661  [2024-12-09 17:09:50.669581] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:20:27.661  [2024-12-09 17:09:50.669595] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:20:27.661  [2024-12-09 17:09:50.669604] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:20:27.661  [2024-12-09 17:09:50.669615] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:20:27.661  [2024-12-09 17:09:50.669623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:20:27.661  [2024-12-09 17:09:50.669636] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
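The l2p region size follows from the two parameters just printed (entry count and address size):

23592960 \times 4\,\text{B} = 94371840\,\text{B} = 90\,\text{MiB},

which matches the 90.00 MiB 'Region l2p' in the NV cache layout dumped below.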
00:20:27.661  [2024-12-09 17:09:50.669647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.669659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:20:27.661  [2024-12-09 17:09:50.669667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.332 ms
00:20:27.661  [2024-12-09 17:09:50.669677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.669768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.661  [2024-12-09 17:09:50.669781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:20:27.661  [2024-12-09 17:09:50.669789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:20:27.661  [2024-12-09 17:09:50.669799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.661  [2024-12-09 17:09:50.669925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:27.661  [2024-12-09 17:09:50.670000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:27.661  [2024-12-09 17:09:50.670009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:27.661  [2024-12-09 17:09:50.670021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:27.661  [2024-12-09 17:09:50.670030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:27.661  [2024-12-09 17:09:50.670045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:20:27.661  [2024-12-09 17:09:50.670054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:20:27.661  [2024-12-09 17:09:50.670068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:27.661  [2024-12-09 17:09:50.670076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:20:27.661  [2024-12-09 17:09:50.670087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:27.661  [2024-12-09 17:09:50.670096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:27.662  [2024-12-09 17:09:50.670107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:20:27.662  [2024-12-09 17:09:50.670114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:27.662  [2024-12-09 17:09:50.670124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:27.662  [2024-12-09 17:09:50.670133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:20:27.662  [2024-12-09 17:09:50.670144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:27.662  [2024-12-09 17:09:50.670164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:27.662  [2024-12-09 17:09:50.670206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:27.662  [2024-12-09 17:09:50.670235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:27.662  [2024-12-09 17:09:50.670260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:27.662  [2024-12-09 17:09:50.670291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:27.662  [2024-12-09 17:09:50.670315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:27.662  [2024-12-09 17:09:50.670334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:27.662  [2024-12-09 17:09:50.670344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:20:27.662  [2024-12-09 17:09:50.670351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:27.662  [2024-12-09 17:09:50.670360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:27.662  [2024-12-09 17:09:50.670368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:20:27.662  [2024-12-09 17:09:50.670380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:27.662  [2024-12-09 17:09:50.670398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:20:27.662  [2024-12-09 17:09:50.670406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670415] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:27.662  [2024-12-09 17:09:50.670425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:27.662  [2024-12-09 17:09:50.670436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:27.662  [2024-12-09 17:09:50.670459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:27.662  [2024-12-09 17:09:50.670467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:20:27.662  [2024-12-09 17:09:50.670477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:20:27.662  [2024-12-09 17:09:50.670484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:27.662  [2024-12-09 17:09:50.670494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:20:27.662  [2024-12-09 17:09:50.670502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:20:27.662  [2024-12-09 17:09:50.670516] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:27.662  [2024-12-09 17:09:50.670526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:27.662  [2024-12-09 17:09:50.670549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:27.662  [2024-12-09 17:09:50.670560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:27.662  [2024-12-09 17:09:50.670568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:27.662  [2024-12-09 17:09:50.670578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:27.662  [2024-12-09 17:09:50.670587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:27.662  [2024-12-09 17:09:50.670599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:27.662  [2024-12-09 17:09:50.670607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:27.662  [2024-12-09 17:09:50.670619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:27.662  [2024-12-09 17:09:50.670628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:27.662  [2024-12-09 17:09:50.670677] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:27.662  [2024-12-09 17:09:50.670687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:27.662  [2024-12-09 17:09:50.670709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:27.662  [2024-12-09 17:09:50.670719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:27.662  [2024-12-09 17:09:50.670726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:27.662  [2024-12-09 17:09:50.670737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.662  [2024-12-09 17:09:50.670747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:20:27.662  [2024-12-09 17:09:50.670757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.904 ms
00:20:27.662  [2024-12-09 17:09:50.670767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.709312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.709363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:27.925  [2024-12-09 17:09:50.709378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.478 ms
00:20:27.925  [2024-12-09 17:09:50.709391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.709535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.709548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:20:27.925  [2024-12-09 17:09:50.709560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.068 ms
00:20:27.925  [2024-12-09 17:09:50.709570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.748825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.748888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:27.925  [2024-12-09 17:09:50.748903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.226 ms
00:20:27.925  [2024-12-09 17:09:50.748912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.749007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.749019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:27.925  [2024-12-09 17:09:50.749031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:20:27.925  [2024-12-09 17:09:50.749040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.749671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.749716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:27.925  [2024-12-09 17:09:50.749730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.602 ms
00:20:27.925  [2024-12-09 17:09:50.749740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.749929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.749955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:27.925  [2024-12-09 17:09:50.749968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.156 ms
00:20:27.925  [2024-12-09 17:09:50.749978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.769907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.769948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:27.925  [2024-12-09 17:09:50.769963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.900 ms
00:20:27.925  [2024-12-09 17:09:50.769972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.796939] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:27.925  [2024-12-09 17:09:50.796983] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:27.925  [2024-12-09 17:09:50.797000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.797010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:20:27.925  [2024-12-09 17:09:50.797023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.905 ms
00:20:27.925  [2024-12-09 17:09:50.797037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.821621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.821656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:20:27.925  [2024-12-09 17:09:50.821671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.497 ms
00:20:27.925  [2024-12-09 17:09:50.821681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.833438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.833470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:20:27.925  [2024-12-09 17:09:50.833484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.682 ms
00:20:27.925  [2024-12-09 17:09:50.833492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.844729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.844760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:20:27.925  [2024-12-09 17:09:50.844772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.170 ms
00:20:27.925  [2024-12-09 17:09:50.844780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.845407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.845432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:20:27.925  [2024-12-09 17:09:50.845444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.524 ms
00:20:27.925  [2024-12-09 17:09:50.845452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.904526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.904564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:20:27.925  [2024-12-09 17:09:50.904578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 59.048 ms
00:20:27.925  [2024-12-09 17:09:50.904587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.915087] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:27.925  [2024-12-09 17:09:50.931602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.931642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:20:27.925  [2024-12-09 17:09:50.931658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.944 ms
00:20:27.925  [2024-12-09 17:09:50.931670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.931744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.931757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:20:27.925  [2024-12-09 17:09:50.931766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:20:27.925  [2024-12-09 17:09:50.931776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.931831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.931842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:20:27.925  [2024-12-09 17:09:50.931872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:20:27.925  [2024-12-09 17:09:50.931884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.931909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.931920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:20:27.925  [2024-12-09 17:09:50.931929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:20:27.925  [2024-12-09 17:09:50.931940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.931973] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:27.925  [2024-12-09 17:09:50.931987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.931998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:20:27.925  [2024-12-09 17:09:50.932008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:20:27.925  [2024-12-09 17:09:50.932017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.956439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.956473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:20:27.925  [2024-12-09 17:09:50.956488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.395 ms
00:20:27.925  [2024-12-09 17:09:50.956505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.956597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:27.925  [2024-12-09 17:09:50.956609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:20:27.925  [2024-12-09 17:09:50.956620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:20:27.925  [2024-12-09 17:09:50.956631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:27.925  [2024-12-09 17:09:50.957584] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:27.925  [2024-12-09 17:09:50.960569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.256 ms, result 0
00:20:28.186  [2024-12-09 17:09:50.963059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:28.186  Some configs were skipped because the RPC state that can call them passed over.
00:20:28.186   17:09:50 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:28.186  [2024-12-09 17:09:51.195308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:28.186  [2024-12-09 17:09:51.195380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:20:28.186  [2024-12-09 17:09:51.195396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.330 ms
00:20:28.186  [2024-12-09 17:09:51.195408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:28.186  [2024-12-09 17:09:51.195445] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.471 ms, result 0
00:20:28.186  true
00:20:28.447   17:09:51 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:28.447  [2024-12-09 17:09:51.427331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:28.447  [2024-12-09 17:09:51.427390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:20:28.447  [2024-12-09 17:09:51.427407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.982 ms
00:20:28.447  [2024-12-09 17:09:51.427416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:28.447  [2024-12-09 17:09:51.427457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.114 ms, result 0
00:20:28.447  true
00:20:28.447   17:09:51 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78386
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78386 ']'
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78386
00:20:28.447    17:09:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:28.447    17:09:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78386
00:20:28.447  killing process with pid 78386
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78386'
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78386
00:20:28.447   17:09:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78386
00:20:29.388  [2024-12-09 17:09:52.143218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.143275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:20:29.388  [2024-12-09 17:09:52.143288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:20:29.388  [2024-12-09 17:09:52.143296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.143317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:29.388  [2024-12-09 17:09:52.145560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.145586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:20:29.388  [2024-12-09 17:09:52.145599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.227 ms
00:20:29.388  [2024-12-09 17:09:52.145606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.145862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.145872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:20:29.388  [2024-12-09 17:09:52.145881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.220 ms
00:20:29.388  [2024-12-09 17:09:52.145888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.149489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.149516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:20:29.388  [2024-12-09 17:09:52.149527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.583 ms
00:20:29.388  [2024-12-09 17:09:52.149533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.154735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.154762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:20:29.388  [2024-12-09 17:09:52.154775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.169 ms
00:20:29.388  [2024-12-09 17:09:52.154782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.163090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.163121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:20:29.388  [2024-12-09 17:09:52.163133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.260 ms
00:20:29.388  [2024-12-09 17:09:52.163139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.169986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.170014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:20:29.388  [2024-12-09 17:09:52.170024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.812 ms
00:20:29.388  [2024-12-09 17:09:52.170031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.170144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.170153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:20:29.388  [2024-12-09 17:09:52.170162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.074 ms
00:20:29.388  [2024-12-09 17:09:52.170169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.178810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.178833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:20:29.388  [2024-12-09 17:09:52.178842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.623 ms
00:20:29.388  [2024-12-09 17:09:52.178856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.186852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.186876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:20:29.388  [2024-12-09 17:09:52.186890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.958 ms
00:20:29.388  [2024-12-09 17:09:52.186896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.194322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.194346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:20:29.388  [2024-12-09 17:09:52.194355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.394 ms
00:20:29.388  [2024-12-09 17:09:52.194361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.201746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.388  [2024-12-09 17:09:52.201770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:20:29.388  [2024-12-09 17:09:52.201780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.321 ms
00:20:29.388  [2024-12-09 17:09:52.201785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.388  [2024-12-09 17:09:52.201814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:29.388  [2024-12-09 17:09:52.201826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.388  [2024-12-09 17:09:52.201983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.201993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.201999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:20:29.389  [2024-12-09 17:09:52.202538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:29.389  [2024-12-09 17:09:52.202549] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:20:29.389  [2024-12-09 17:09:52.202558] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:20:29.389  [2024-12-09 17:09:52.202565] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:20:29.389  [2024-12-09 17:09:52.202571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:20:29.389  [2024-12-09 17:09:52.202579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:20:29.389  [2024-12-09 17:09:52.202585] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:29.389  [2024-12-09 17:09:52.202592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:20:29.389  [2024-12-09 17:09:52.202598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:20:29.389  [2024-12-09 17:09:52.202604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:20:29.389  [2024-12-09 17:09:52.202609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:20:29.389  [2024-12-09 17:09:52.202616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.390  [2024-12-09 17:09:52.202622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:20:29.390  [2024-12-09 17:09:52.202631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.803 ms
00:20:29.390  [2024-12-09 17:09:52.202638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.212869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.390  [2024-12-09 17:09:52.212894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:20:29.390  [2024-12-09 17:09:52.212906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.212 ms
00:20:29.390  [2024-12-09 17:09:52.212912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.213224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.390  [2024-12-09 17:09:52.213239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:20:29.390  [2024-12-09 17:09:52.213249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.277 ms
00:20:29.390  [2024-12-09 17:09:52.213255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.250102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.250130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:29.390  [2024-12-09 17:09:52.250140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.250146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.251279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.251302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:29.390  [2024-12-09 17:09:52.251314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.251320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.251360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.251368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:29.390  [2024-12-09 17:09:52.251377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.251383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.251398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.251405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:29.390  [2024-12-09 17:09:52.251414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.251421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.313883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.313918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:29.390  [2024-12-09 17:09:52.313928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.313935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:29.390  [2024-12-09 17:09:52.365413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:29.390  [2024-12-09 17:09:52.365515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:29.390  [2024-12-09 17:09:52.365563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:29.390  [2024-12-09 17:09:52.365664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:20:29.390  [2024-12-09 17:09:52.365715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:29.390  [2024-12-09 17:09:52.365778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:29.390  [2024-12-09 17:09:52.365834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:29.390  [2024-12-09 17:09:52.365843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:29.390  [2024-12-09 17:09:52.365860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:29.390  [2024-12-09 17:09:52.365989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 222.742 ms, result 0
00:20:29.961   17:09:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:29.961  [2024-12-09 17:09:52.981418] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:20:29.961  [2024-12-09 17:09:52.981519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78445 ]
00:20:30.222  [2024-12-09 17:09:53.125672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.222  [2024-12-09 17:09:53.217243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:30.483  [2024-12-09 17:09:53.450737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:30.483  [2024-12-09 17:09:53.450797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:30.743  [2024-12-09 17:09:53.607581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.743  [2024-12-09 17:09:53.607619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:20:30.743  [2024-12-09 17:09:53.607631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:20:30.743  [2024-12-09 17:09:53.607637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.743  [2024-12-09 17:09:53.609887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.743  [2024-12-09 17:09:53.609916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:30.743  [2024-12-09 17:09:53.609924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.237 ms
00:20:30.743  [2024-12-09 17:09:53.609930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.743  [2024-12-09 17:09:53.609995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:30.743  [2024-12-09 17:09:53.610555] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:30.743  [2024-12-09 17:09:53.610572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.610579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:30.744  [2024-12-09 17:09:53.610587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.584 ms
00:20:30.744  [2024-12-09 17:09:53.610594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.611914] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:30.744  [2024-12-09 17:09:53.622364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.622392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:20:30.744  [2024-12-09 17:09:53.622402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.452 ms
00:20:30.744  [2024-12-09 17:09:53.622409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.622480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.622490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:20:30.744  [2024-12-09 17:09:53.622497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:20:30.744  [2024-12-09 17:09:53.622503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.628723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.628748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:30.744  [2024-12-09 17:09:53.628756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.188 ms
00:20:30.744  [2024-12-09 17:09:53.628762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.628834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.628842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:30.744  [2024-12-09 17:09:53.628860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:20:30.744  [2024-12-09 17:09:53.628867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.628890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.628898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:20:30.744  [2024-12-09 17:09:53.628905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:20:30.744  [2024-12-09 17:09:53.628912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.628928] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:30.744  [2024-12-09 17:09:53.631836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.631872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:30.744  [2024-12-09 17:09:53.631880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.911 ms
00:20:30.744  [2024-12-09 17:09:53.631885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.631919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.631926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:20:30.744  [2024-12-09 17:09:53.631932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:20:30.744  [2024-12-09 17:09:53.631938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.631954] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:30.744  [2024-12-09 17:09:53.631971] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:30.744  [2024-12-09 17:09:53.632000] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:30.744  [2024-12-09 17:09:53.632013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:30.744  [2024-12-09 17:09:53.632095] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:30.744  [2024-12-09 17:09:53.632105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:30.744  [2024-12-09 17:09:53.632113] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:30.744  [2024-12-09 17:09:53.632123] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632130] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632137] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:20:30.744  [2024-12-09 17:09:53.632143] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:20:30.744  [2024-12-09 17:09:53.632149] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:20:30.744  [2024-12-09 17:09:53.632155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:20:30.744  [2024-12-09 17:09:53.632161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.632167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:20:30.744  [2024-12-09 17:09:53.632172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.209 ms
00:20:30.744  [2024-12-09 17:09:53.632178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.632245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.744  [2024-12-09 17:09:53.632254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:20:30.744  [2024-12-09 17:09:53.632264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:20:30.744  [2024-12-09 17:09:53.632270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.744  [2024-12-09 17:09:53.632348] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:30.744  [2024-12-09 17:09:53.632357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:30.744  [2024-12-09 17:09:53.632363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:30.744  [2024-12-09 17:09:53.632383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:30.744  [2024-12-09 17:09:53.632400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:30.744  [2024-12-09 17:09:53.632412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:30.744  [2024-12-09 17:09:53.632423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:20:30.744  [2024-12-09 17:09:53.632428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:30.744  [2024-12-09 17:09:53.632433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:30.744  [2024-12-09 17:09:53.632439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:20:30.744  [2024-12-09 17:09:53.632444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:30.744  [2024-12-09 17:09:53.632456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:30.744  [2024-12-09 17:09:53.632473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:30.744  [2024-12-09 17:09:53.632506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:30.744  [2024-12-09 17:09:53.632523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:30.744  [2024-12-09 17:09:53.632540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:30.744  [2024-12-09 17:09:53.632554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:30.744  [2024-12-09 17:09:53.632564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:30.744  [2024-12-09 17:09:53.632569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:20:30.744  [2024-12-09 17:09:53.632574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:30.744  [2024-12-09 17:09:53.632579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:30.744  [2024-12-09 17:09:53.632584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:20:30.744  [2024-12-09 17:09:53.632589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:30.744  [2024-12-09 17:09:53.632601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:20:30.744  [2024-12-09 17:09:53.632606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632612] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:30.744  [2024-12-09 17:09:53.632619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:30.744  [2024-12-09 17:09:53.632626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:30.744  [2024-12-09 17:09:53.632632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:30.744  [2024-12-09 17:09:53.632638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:30.744  [2024-12-09 17:09:53.632643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:20:30.744  [2024-12-09 17:09:53.632648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:20:30.744  [2024-12-09 17:09:53.632653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:30.744  [2024-12-09 17:09:53.632658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:20:30.744  [2024-12-09 17:09:53.632663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:20:30.745  [2024-12-09 17:09:53.632669] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:30.745  [2024-12-09 17:09:53.632676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:30.745  [2024-12-09 17:09:53.632689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:30.745  [2024-12-09 17:09:53.632695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:30.745  [2024-12-09 17:09:53.632700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:30.745  [2024-12-09 17:09:53.632706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:30.745  [2024-12-09 17:09:53.632711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:30.745  [2024-12-09 17:09:53.632717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:30.745  [2024-12-09 17:09:53.632722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:30.745  [2024-12-09 17:09:53.632728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:30.745  [2024-12-09 17:09:53.632733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:30.745  [2024-12-09 17:09:53.632760] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:30.745  [2024-12-09 17:09:53.632766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:30.745  [2024-12-09 17:09:53.632779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:30.745  [2024-12-09 17:09:53.632785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:30.745  [2024-12-09 17:09:53.632790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:30.745  [2024-12-09 17:09:53.632796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.632804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:20:30.745  [2024-12-09 17:09:53.632810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.501 ms
00:20:30.745  [2024-12-09 17:09:53.632816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.657054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.657085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:30.745  [2024-12-09 17:09:53.657094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.163 ms
00:20:30.745  [2024-12-09 17:09:53.657101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.657202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.657211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:20:30.745  [2024-12-09 17:09:53.657218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:20:30.745  [2024-12-09 17:09:53.657224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.696253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.696287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:30.745  [2024-12-09 17:09:53.696299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.011 ms
00:20:30.745  [2024-12-09 17:09:53.696306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.696368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.696377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:30.745  [2024-12-09 17:09:53.696384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:20:30.745  [2024-12-09 17:09:53.696391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.696796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.696816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:30.745  [2024-12-09 17:09:53.696824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.389 ms
00:20:30.745  [2024-12-09 17:09:53.696834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.696963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.696973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:30.745  [2024-12-09 17:09:53.696980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.097 ms
00:20:30.745  [2024-12-09 17:09:53.696986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.709212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.709237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:30.745  [2024-12-09 17:09:53.709246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.210 ms
00:20:30.745  [2024-12-09 17:09:53.709252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.719910] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:30.745  [2024-12-09 17:09:53.719937] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:30.745  [2024-12-09 17:09:53.719947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.719954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:20:30.745  [2024-12-09 17:09:53.719961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.599 ms
00:20:30.745  [2024-12-09 17:09:53.719968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.738885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.738912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:20:30.745  [2024-12-09 17:09:53.738922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.856 ms
00:20:30.745  [2024-12-09 17:09:53.738930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.748248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.748273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:20:30.745  [2024-12-09 17:09:53.748280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.264 ms
00:20:30.745  [2024-12-09 17:09:53.748286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.757177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.757201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:20:30.745  [2024-12-09 17:09:53.757208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.849 ms
00:20:30.745  [2024-12-09 17:09:53.757214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:30.745  [2024-12-09 17:09:53.757681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.745  [2024-12-09 17:09:53.757704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:20:30.745  [2024-12-09 17:09:53.757712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.402 ms
00:20:30.745  [2024-12-09 17:09:53.757718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.804875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.804913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:20:31.006  [2024-12-09 17:09:53.804923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.138 ms
00:20:31.006  [2024-12-09 17:09:53.804931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.812738] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:31.006  [2024-12-09 17:09:53.826800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.826833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:20:31.006  [2024-12-09 17:09:53.826843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.804 ms
00:20:31.006  [2024-12-09 17:09:53.826863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.826945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.826954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:20:31.006  [2024-12-09 17:09:53.826962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:20:31.006  [2024-12-09 17:09:53.826969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.827013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.827021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:20:31.006  [2024-12-09 17:09:53.827028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:20:31.006  [2024-12-09 17:09:53.827037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.827064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.827072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:20:31.006  [2024-12-09 17:09:53.827078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:20:31.006  [2024-12-09 17:09:53.827084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.827111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:31.006  [2024-12-09 17:09:53.827118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.827124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:20:31.006  [2024-12-09 17:09:53.827130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:20:31.006  [2024-12-09 17:09:53.827136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.845485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.845514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:20:31.006  [2024-12-09 17:09:53.845522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.335 ms
00:20:31.006  [2024-12-09 17:09:53.845529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.845603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:31.006  [2024-12-09 17:09:53.845611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:20:31.006  [2024-12-09 17:09:53.845619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:20:31.006  [2024-12-09 17:09:53.845625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:31.006  [2024-12-09 17:09:53.846527] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:31.006  [2024-12-09 17:09:53.848915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 238.683 ms, result 0
00:20:31.006  [2024-12-09 17:09:53.849793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:31.006  [2024-12-09 17:09:53.860441] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:31.942  
[2024-12-09T17:09:55.922Z] Copying: 30/256 [MB] (30 MBps)
[2024-12-09T17:09:57.307Z] Copying: 44/256 [MB] (14 MBps)
[2024-12-09T17:09:58.251Z] Copying: 65/256 [MB] (21 MBps)
[2024-12-09T17:09:59.196Z] Copying: 81/256 [MB] (15 MBps)
[2024-12-09T17:10:00.185Z] Copying: 93/256 [MB] (12 MBps)
[2024-12-09T17:10:01.125Z] Copying: 112/256 [MB] (18 MBps)
[2024-12-09T17:10:02.066Z] Copying: 127/256 [MB] (15 MBps)
[2024-12-09T17:10:03.008Z] Copying: 148/256 [MB] (21 MBps)
[2024-12-09T17:10:03.947Z] Copying: 162/256 [MB] (13 MBps)
[2024-12-09T17:10:05.333Z] Copying: 173/256 [MB] (11 MBps)
[2024-12-09T17:10:05.905Z] Copying: 186/256 [MB] (13 MBps)
[2024-12-09T17:10:07.295Z] Copying: 199/256 [MB] (12 MBps)
[2024-12-09T17:10:08.239Z] Copying: 211/256 [MB] (11 MBps)
[2024-12-09T17:10:09.183Z] Copying: 227/256 [MB] (16 MBps)
[2024-12-09T17:10:10.125Z] Copying: 240/256 [MB] (12 MBps)
[2024-12-09T17:10:10.386Z] Copying: 252/256 [MB] (12 MBps)
[2024-12-09T17:10:10.648Z] Copying: 256/256 [MB] (average 15 MBps)
[2024-12-09 17:10:10.535674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:47.607  [2024-12-09 17:10:10.546633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.546685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:20:47.607  [2024-12-09 17:10:10.546707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:20:47.607  [2024-12-09 17:10:10.546717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.546746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:47.607  [2024-12-09 17:10:10.550257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.550294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:20:47.607  [2024-12-09 17:10:10.550305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.497 ms
00:20:47.607  [2024-12-09 17:10:10.550314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.550604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.550615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:20:47.607  [2024-12-09 17:10:10.550624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.261 ms
00:20:47.607  [2024-12-09 17:10:10.550632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.555135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.555163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:20:47.607  [2024-12-09 17:10:10.555173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.483 ms
00:20:47.607  [2024-12-09 17:10:10.555181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.562428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.562461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:20:47.607  [2024-12-09 17:10:10.562472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.226 ms
00:20:47.607  [2024-12-09 17:10:10.562480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.587382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.587421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:20:47.607  [2024-12-09 17:10:10.587433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.831 ms
00:20:47.607  [2024-12-09 17:10:10.587441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.603138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.603177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:20:47.607  [2024-12-09 17:10:10.603192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.653 ms
00:20:47.607  [2024-12-09 17:10:10.603201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.607  [2024-12-09 17:10:10.603352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.607  [2024-12-09 17:10:10.603363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:20:47.607  [2024-12-09 17:10:10.603381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.096 ms
00:20:47.607  [2024-12-09 17:10:10.603389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.608  [2024-12-09 17:10:10.628049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.608  [2024-12-09 17:10:10.628085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:20:47.608  [2024-12-09 17:10:10.628096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.643 ms
00:20:47.608  [2024-12-09 17:10:10.628103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.870  [2024-12-09 17:10:10.652305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.870  [2024-12-09 17:10:10.652346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:20:47.870  [2024-12-09 17:10:10.652357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.163 ms
00:20:47.870  [2024-12-09 17:10:10.652364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.870  [2024-12-09 17:10:10.676080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.870  [2024-12-09 17:10:10.676124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:20:47.870  [2024-12-09 17:10:10.676139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.675 ms
00:20:47.870  [2024-12-09 17:10:10.676146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.870  [2024-12-09 17:10:10.700211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.870  [2024-12-09 17:10:10.700259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:20:47.870  [2024-12-09 17:10:10.700271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.990 ms
00:20:47.870  [2024-12-09 17:10:10.700280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.870  [2024-12-09 17:10:10.700328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:47.870  [2024-12-09 17:10:10.700345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.870  [2024-12-09 17:10:10.700692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.700999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:20:47.871  [2024-12-09 17:10:10.701193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:47.871  [2024-12-09 17:10:10.701202] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         fc2373b3-a810-4875-b935-5ccc0d51d98c
00:20:47.871  [2024-12-09 17:10:10.701211] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:20:47.871  [2024-12-09 17:10:10.701219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:20:47.871  [2024-12-09 17:10:10.701227] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:20:47.871  [2024-12-09 17:10:10.701236] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:20:47.871  [2024-12-09 17:10:10.701244] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:47.871  [2024-12-09 17:10:10.701253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:20:47.871  [2024-12-09 17:10:10.701264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:20:47.871  [2024-12-09 17:10:10.701271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:20:47.871  [2024-12-09 17:10:10.701277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:20:47.871  [2024-12-09 17:10:10.701285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.871  [2024-12-09 17:10:10.701293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:20:47.871  [2024-12-09 17:10:10.701302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.958 ms
00:20:47.871  [2024-12-09 17:10:10.701310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.871  [2024-12-09 17:10:10.715550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.871  [2024-12-09 17:10:10.715593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:20:47.871  [2024-12-09 17:10:10.715605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.206 ms
00:20:47.871  [2024-12-09 17:10:10.715613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.871  [2024-12-09 17:10:10.716084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:47.871  [2024-12-09 17:10:10.716095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:20:47.871  [2024-12-09 17:10:10.716105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.427 ms
00:20:47.871  [2024-12-09 17:10:10.716112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.871  [2024-12-09 17:10:10.756982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:47.871  [2024-12-09 17:10:10.757029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:47.871  [2024-12-09 17:10:10.757041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:47.871  [2024-12-09 17:10:10.757056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.871  [2024-12-09 17:10:10.757166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:47.871  [2024-12-09 17:10:10.757176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:47.871  [2024-12-09 17:10:10.757186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:47.871  [2024-12-09 17:10:10.757194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.871  [2024-12-09 17:10:10.757249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:47.871  [2024-12-09 17:10:10.757259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:47.871  [2024-12-09 17:10:10.757268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:47.871  [2024-12-09 17:10:10.757276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.871  [2024-12-09 17:10:10.757298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:47.871  [2024-12-09 17:10:10.757307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:47.871  [2024-12-09 17:10:10.757315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:47.872  [2024-12-09 17:10:10.757324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:47.872  [2024-12-09 17:10:10.848016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:47.872  [2024-12-09 17:10:10.848098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:47.872  [2024-12-09 17:10:10.848114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:47.872  [2024-12-09 17:10:10.848123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:48.133  [2024-12-09 17:10:10.922152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:48.133  [2024-12-09 17:10:10.922286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:48.133  [2024-12-09 17:10:10.922361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:48.133  [2024-12-09 17:10:10.922508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:20:48.133  [2024-12-09 17:10:10.922579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:48.133  [2024-12-09 17:10:10.922663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:48.133  [2024-12-09 17:10:10.922748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:48.133  [2024-12-09 17:10:10.922758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:48.133  [2024-12-09 17:10:10.922767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:48.133  [2024-12-09 17:10:10.922992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.345 ms, result 0
00:20:49.077  
00:20:49.077  
00:20:49.077   17:10:11 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:49.339  /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:20:49.339   17:10:12 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:20:49.339   17:10:12 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:20:49.339   17:10:12 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:49.339   17:10:12 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:49.339   17:10:12 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:20:49.600   17:10:12 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:49.600   17:10:12 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78386
00:20:49.600   17:10:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78386 ']'
00:20:49.600   17:10:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78386
00:20:49.600  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78386) - No such process
00:20:49.600  Process with pid 78386 is not found
00:20:49.600   17:10:12 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78386 is not found'
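`killprocess` probes the pid with `kill -0`, which delivers no signal and only tests whether the process exists; the target already exited during the FTL shutdown above, so the probe fails and the helper just logs it. A sketch of that idiom (simplified; the repo's helper does more bookkeeping):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if kill -0 "$pid" 2>/dev/null; then    # signal 0 = existence check only
            kill "$pid"
            wait "$pid" 2>/dev/null             # reap it if it was our child
        else
            echo "Process with pid $pid is not found"
        fi
    }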
00:20:49.600  ************************************
00:20:49.600  END TEST ftl_trim
00:20:49.600  ************************************
00:20:49.600  
00:20:49.600  real	1m23.214s
00:20:49.600  user	1m41.054s
00:20:49.600  sys	0m15.243s
00:20:49.600   17:10:12 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:49.600   17:10:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:49.600   17:10:12 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:20:49.600   17:10:12 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:20:49.600   17:10:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:49.600   17:10:12 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:49.600  ************************************
00:20:49.600  START TEST ftl_restore
00:20:49.600  ************************************
00:20:49.600   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:20:49.600  * Looking for test storage...
00:20:49.600  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:49.600    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:49.600     17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:49.600     17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version
00:20:49.862    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:20:49.862    17:10:12 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:20:49.862     17:10:12 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:49.863     17:10:12 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:20:49.863    17:10:12 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:20:49.863    17:10:12 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:49.863    17:10:12 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:49.863    17:10:12 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:20:49.863    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:49.863    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:49.863  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.863  		--rc genhtml_branch_coverage=1
00:20:49.863  		--rc genhtml_function_coverage=1
00:20:49.863  		--rc genhtml_legend=1
00:20:49.863  		--rc geninfo_all_blocks=1
00:20:49.863  		--rc geninfo_unexecuted_blocks=1
00:20:49.863  		
00:20:49.863  		'
00:20:49.863    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:49.863  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.863  		--rc genhtml_branch_coverage=1
00:20:49.863  		--rc genhtml_function_coverage=1
00:20:49.863  		--rc genhtml_legend=1
00:20:49.863  		--rc geninfo_all_blocks=1
00:20:49.863  		--rc geninfo_unexecuted_blocks=1
00:20:49.863  		
00:20:49.863  		'
00:20:49.863    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:49.863  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.863  		--rc genhtml_branch_coverage=1
00:20:49.863  		--rc genhtml_function_coverage=1
00:20:49.863  		--rc genhtml_legend=1
00:20:49.863  		--rc geninfo_all_blocks=1
00:20:49.863  		--rc geninfo_unexecuted_blocks=1
00:20:49.863  		
00:20:49.863  		'
00:20:49.863    17:10:12 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:49.863  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:49.863  		--rc genhtml_branch_coverage=1
00:20:49.863  		--rc genhtml_function_coverage=1
00:20:49.863  		--rc genhtml_legend=1
00:20:49.863  		--rc geninfo_all_blocks=1
00:20:49.863  		--rc geninfo_unexecuted_blocks=1
00:20:49.863  		
00:20:49.863  		'
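The block above is the usual lcov capability probe: `lt 1.15 2` splits both version strings on `IFS=.-:` into arrays and compares them field by field, padding the shorter one with zeros, and the outcome selects which `--rc` option spelling gets exported. A condensed sketch of that comparison (simplified from the traced cmp_versions logic):

    lt() {                               # returns 0 iff version $1 < version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                          # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"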
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:49.863      17:10:12 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:20:49.863     17:10:12 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:49.863     17:10:12 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:49.863    17:10:12 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:20:49.863  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.1YLUFYRS18
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78710
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78710
00:20:49.863   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78710 ']'
00:20:49.863   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:49.863   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:49.863   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:49.863   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:49.863   17:10:12 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:20:49.863   17:10:12 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:49.863  [2024-12-09 17:10:12.802187] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:20:49.863  [2024-12-09 17:10:12.802558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78710 ]
00:20:50.124  [2024-12-09 17:10:12.968183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:50.124  [2024-12-09 17:10:13.115596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:51.068   17:10:13 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:51.068   17:10:13 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0
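`waitforlisten` (its tail is what's traced above: the retry counter check, then `return 0`) blocks until spdk_tgt answers on /var/tmp/spdk.sock, bailing out early if the pid dies first. A simplified sketch of that shape (assumed structure, not the repo's exact body):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 100; i > 0; i-- )); do
            # rpc_get_methods only succeeds once the target's RPC server is up
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" 2> /dev/null || return 1   # target died while waiting
            sleep 0.5
        done
        return 1
    }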
00:20:51.068    17:10:13 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:20:51.068    17:10:13 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:20:51.068    17:10:13 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:20:51.068    17:10:13 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:20:51.068    17:10:13 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:20:51.068     17:10:13 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:20:51.329    17:10:14 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:20:51.329    17:10:14 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:20:51.329     17:10:14 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:20:51.329     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:20:51.329     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:51.329     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:20:51.329     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:20:51.329      17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:20:51.590     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:51.590    {
00:20:51.590      "name": "nvme0n1",
00:20:51.590      "aliases": [
00:20:51.590        "a17fbac8-c824-45d7-a162-454f23b38272"
00:20:51.590      ],
00:20:51.590      "product_name": "NVMe disk",
00:20:51.590      "block_size": 4096,
00:20:51.590      "num_blocks": 1310720,
00:20:51.590      "uuid": "a17fbac8-c824-45d7-a162-454f23b38272",
00:20:51.590      "numa_id": -1,
00:20:51.590      "assigned_rate_limits": {
00:20:51.590        "rw_ios_per_sec": 0,
00:20:51.590        "rw_mbytes_per_sec": 0,
00:20:51.590        "r_mbytes_per_sec": 0,
00:20:51.590        "w_mbytes_per_sec": 0
00:20:51.590      },
00:20:51.590      "claimed": true,
00:20:51.590      "claim_type": "read_many_write_one",
00:20:51.590      "zoned": false,
00:20:51.590      "supported_io_types": {
00:20:51.590        "read": true,
00:20:51.590        "write": true,
00:20:51.590        "unmap": true,
00:20:51.590        "flush": true,
00:20:51.590        "reset": true,
00:20:51.590        "nvme_admin": true,
00:20:51.590        "nvme_io": true,
00:20:51.590        "nvme_io_md": false,
00:20:51.590        "write_zeroes": true,
00:20:51.590        "zcopy": false,
00:20:51.590        "get_zone_info": false,
00:20:51.590        "zone_management": false,
00:20:51.590        "zone_append": false,
00:20:51.590        "compare": true,
00:20:51.590        "compare_and_write": false,
00:20:51.590        "abort": true,
00:20:51.590        "seek_hole": false,
00:20:51.590        "seek_data": false,
00:20:51.590        "copy": true,
00:20:51.590        "nvme_iov_md": false
00:20:51.590      },
00:20:51.590      "driver_specific": {
00:20:51.590        "nvme": [
00:20:51.590          {
00:20:51.590            "pci_address": "0000:00:11.0",
00:20:51.590            "trid": {
00:20:51.590              "trtype": "PCIe",
00:20:51.590              "traddr": "0000:00:11.0"
00:20:51.590            },
00:20:51.590            "ctrlr_data": {
00:20:51.590              "cntlid": 0,
00:20:51.590              "vendor_id": "0x1b36",
00:20:51.590              "model_number": "QEMU NVMe Ctrl",
00:20:51.590              "serial_number": "12341",
00:20:51.590              "firmware_revision": "8.0.0",
00:20:51.590              "subnqn": "nqn.2019-08.org.qemu:12341",
00:20:51.590              "oacs": {
00:20:51.590                "security": 0,
00:20:51.590                "format": 1,
00:20:51.590                "firmware": 0,
00:20:51.590                "ns_manage": 1
00:20:51.590              },
00:20:51.590              "multi_ctrlr": false,
00:20:51.590              "ana_reporting": false
00:20:51.590            },
00:20:51.590            "vs": {
00:20:51.590              "nvme_version": "1.4"
00:20:51.590            },
00:20:51.590            "ns_data": {
00:20:51.590              "id": 1,
00:20:51.590              "can_share": false
00:20:51.590            }
00:20:51.590          }
00:20:51.590        ],
00:20:51.590        "mp_policy": "active_passive"
00:20:51.590      }
00:20:51.590    }
00:20:51.590  ]'
00:20:51.590      17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:51.590     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:20:51.590      17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:51.590     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720
00:20:51.590     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:20:51.590     17:10:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120
00:20:51.590    17:10:14 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120
00:20:51.590    17:10:14 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
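`get_bdev_size` converts the reported geometry to MiB: 4096 B per block × 1310720 blocks = 5368709120 B = 5120 MiB. The same computation as a sketch (jq filters copied from the trace above):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")     # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")     # 1310720
    echo $(( bs * nb / 1024 / 1024 ))          # 5120 (MiB)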
00:20:51.590    17:10:14 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:20:51.590     17:10:14 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:20:51.590     17:10:14 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:20:51.852    17:10:14 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=15d4bce2-aacc-4bab-aa28-3868b9920eae
00:20:51.852    17:10:14 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
00:20:51.852    17:10:14 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15d4bce2-aacc-4bab-aa28-3868b9920eae
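`clear_lvols` guarantees a clean slate: it lists every existing lvstore UUID and deletes each one before the test builds its own. As a standalone sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Enumerate lvstore UUIDs, then drop each store (and its lvols) in turn.
    for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done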
00:20:52.155     17:10:14 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=145277a5-7295-4eb9-bce6-dd6f22e7caa9
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 145277a5-7295-4eb9-bce6-dd6f22e7caa9
00:20:52.440   17:10:15 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=48b9ac47-fabe-489e-821b-4301fb391193
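The restore target is then built as a logical volume: a store named `lvs` on nvme0n1, and inside it a 103424 MiB thin-provisioned (-t) lvol, nvme0n1p0, whose UUID (48b9ac47-...) is what the rest of the script carries around as `split_bdev`. Thin provisioning is what lets a 101 GiB volume sit on a 5 GiB disk: clusters are allocated only on first write. The two traced RPCs, restated:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)             # prints store UUID
    bdev=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")  # prints lvol UUID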
00:20:52.440   17:10:15 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.440    17:10:15 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
00:20:52.440     17:10:15 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.440     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.440     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:52.440     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:20:52.440     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:20:52.440      17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.706     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:52.706    {
00:20:52.706      "name": "48b9ac47-fabe-489e-821b-4301fb391193",
00:20:52.706      "aliases": [
00:20:52.706        "lvs/nvme0n1p0"
00:20:52.706      ],
00:20:52.706      "product_name": "Logical Volume",
00:20:52.706      "block_size": 4096,
00:20:52.706      "num_blocks": 26476544,
00:20:52.706      "uuid": "48b9ac47-fabe-489e-821b-4301fb391193",
00:20:52.706      "assigned_rate_limits": {
00:20:52.706        "rw_ios_per_sec": 0,
00:20:52.706        "rw_mbytes_per_sec": 0,
00:20:52.706        "r_mbytes_per_sec": 0,
00:20:52.706        "w_mbytes_per_sec": 0
00:20:52.706      },
00:20:52.706      "claimed": false,
00:20:52.706      "zoned": false,
00:20:52.706      "supported_io_types": {
00:20:52.706        "read": true,
00:20:52.706        "write": true,
00:20:52.706        "unmap": true,
00:20:52.706        "flush": false,
00:20:52.706        "reset": true,
00:20:52.706        "nvme_admin": false,
00:20:52.706        "nvme_io": false,
00:20:52.706        "nvme_io_md": false,
00:20:52.706        "write_zeroes": true,
00:20:52.706        "zcopy": false,
00:20:52.706        "get_zone_info": false,
00:20:52.706        "zone_management": false,
00:20:52.706        "zone_append": false,
00:20:52.706        "compare": false,
00:20:52.706        "compare_and_write": false,
00:20:52.706        "abort": false,
00:20:52.706        "seek_hole": true,
00:20:52.706        "seek_data": true,
00:20:52.706        "copy": false,
00:20:52.706        "nvme_iov_md": false
00:20:52.706      },
00:20:52.706      "driver_specific": {
00:20:52.706        "lvol": {
00:20:52.706          "lvol_store_uuid": "145277a5-7295-4eb9-bce6-dd6f22e7caa9",
00:20:52.706          "base_bdev": "nvme0n1",
00:20:52.706          "thin_provision": true,
00:20:52.706          "num_allocated_clusters": 0,
00:20:52.706          "snapshot": false,
00:20:52.706          "clone": false,
00:20:52.706          "esnap_clone": false
00:20:52.706        }
00:20:52.706      }
00:20:52.706    }
00:20:52.706  ]'
00:20:52.706      17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:52.706     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:20:52.706      17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:52.706     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:20:52.706     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:20:52.706     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:20:52.706    17:10:15 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
00:20:52.706    17:10:15 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:20:52.706     17:10:15 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:20:52.967    17:10:15 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:20:52.967    17:10:15 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:20:52.967     17:10:15 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.967     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=48b9ac47-fabe-489e-821b-4301fb391193
00:20:52.967     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:52.967     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:20:52.967     17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:20:52.967      17:10:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48b9ac47-fabe-489e-821b-4301fb391193
00:20:53.229     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:53.229    {
00:20:53.229      "name": "48b9ac47-fabe-489e-821b-4301fb391193",
00:20:53.229      "aliases": [
00:20:53.229        "lvs/nvme0n1p0"
00:20:53.229      ],
00:20:53.229      "product_name": "Logical Volume",
00:20:53.229      "block_size": 4096,
00:20:53.229      "num_blocks": 26476544,
00:20:53.229      "uuid": "48b9ac47-fabe-489e-821b-4301fb391193",
00:20:53.229      "assigned_rate_limits": {
00:20:53.229        "rw_ios_per_sec": 0,
00:20:53.229        "rw_mbytes_per_sec": 0,
00:20:53.229        "r_mbytes_per_sec": 0,
00:20:53.229        "w_mbytes_per_sec": 0
00:20:53.229      },
00:20:53.229      "claimed": false,
00:20:53.229      "zoned": false,
00:20:53.229      "supported_io_types": {
00:20:53.229        "read": true,
00:20:53.229        "write": true,
00:20:53.229        "unmap": true,
00:20:53.229        "flush": false,
00:20:53.229        "reset": true,
00:20:53.229        "nvme_admin": false,
00:20:53.229        "nvme_io": false,
00:20:53.229        "nvme_io_md": false,
00:20:53.229        "write_zeroes": true,
00:20:53.229        "zcopy": false,
00:20:53.229        "get_zone_info": false,
00:20:53.229        "zone_management": false,
00:20:53.229        "zone_append": false,
00:20:53.229        "compare": false,
00:20:53.229        "compare_and_write": false,
00:20:53.229        "abort": false,
00:20:53.229        "seek_hole": true,
00:20:53.229        "seek_data": true,
00:20:53.229        "copy": false,
00:20:53.229        "nvme_iov_md": false
00:20:53.229      },
00:20:53.229      "driver_specific": {
00:20:53.229        "lvol": {
00:20:53.229          "lvol_store_uuid": "145277a5-7295-4eb9-bce6-dd6f22e7caa9",
00:20:53.229          "base_bdev": "nvme0n1",
00:20:53.229          "thin_provision": true,
00:20:53.229          "num_allocated_clusters": 0,
00:20:53.229          "snapshot": false,
00:20:53.229          "clone": false,
00:20:53.229          "esnap_clone": false
00:20:53.229        }
00:20:53.229      }
00:20:53.229    }
00:20:53.229  ]'
00:20:53.229      17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:53.229     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:20:53.229      17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:53.229     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:20:53.229     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:20:53.229     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:20:53.229    17:10:16 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:20:53.229    17:10:16 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:20:53.491   17:10:16 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
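The write-buffer cache is a slice of the second NVMe controller: `bdev_split_create` carves nvc0n1 into a single 5171 MB split, nvc0n1p0, matching the cache_size computed just above. As a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -s: split size in MB; trailing positional 1 = number of splits to create.
    $rpc bdev_split_create nvc0n1 -s 5171 1    # => ["nvc0n1p0"]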
00:20:53.491    17:10:16 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 48b9ac47-fabe-489e-821b-4301fb391193
00:20:53.491    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=48b9ac47-fabe-489e-821b-4301fb391193
00:20:53.491    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:53.491    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:20:53.491    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:20:53.491     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48b9ac47-fabe-489e-821b-4301fb391193
00:20:53.752    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:53.752    {
00:20:53.752      "name": "48b9ac47-fabe-489e-821b-4301fb391193",
00:20:53.752      "aliases": [
00:20:53.752        "lvs/nvme0n1p0"
00:20:53.752      ],
00:20:53.752      "product_name": "Logical Volume",
00:20:53.752      "block_size": 4096,
00:20:53.752      "num_blocks": 26476544,
00:20:53.752      "uuid": "48b9ac47-fabe-489e-821b-4301fb391193",
00:20:53.752      "assigned_rate_limits": {
00:20:53.752        "rw_ios_per_sec": 0,
00:20:53.752        "rw_mbytes_per_sec": 0,
00:20:53.752        "r_mbytes_per_sec": 0,
00:20:53.752        "w_mbytes_per_sec": 0
00:20:53.752      },
00:20:53.752      "claimed": false,
00:20:53.752      "zoned": false,
00:20:53.752      "supported_io_types": {
00:20:53.752        "read": true,
00:20:53.752        "write": true,
00:20:53.752        "unmap": true,
00:20:53.752        "flush": false,
00:20:53.752        "reset": true,
00:20:53.752        "nvme_admin": false,
00:20:53.752        "nvme_io": false,
00:20:53.752        "nvme_io_md": false,
00:20:53.752        "write_zeroes": true,
00:20:53.752        "zcopy": false,
00:20:53.752        "get_zone_info": false,
00:20:53.752        "zone_management": false,
00:20:53.752        "zone_append": false,
00:20:53.752        "compare": false,
00:20:53.752        "compare_and_write": false,
00:20:53.752        "abort": false,
00:20:53.752        "seek_hole": true,
00:20:53.752        "seek_data": true,
00:20:53.752        "copy": false,
00:20:53.752        "nvme_iov_md": false
00:20:53.752      },
00:20:53.752      "driver_specific": {
00:20:53.752        "lvol": {
00:20:53.752          "lvol_store_uuid": "145277a5-7295-4eb9-bce6-dd6f22e7caa9",
00:20:53.752          "base_bdev": "nvme0n1",
00:20:53.752          "thin_provision": true,
00:20:53.752          "num_allocated_clusters": 0,
00:20:53.752          "snapshot": false,
00:20:53.752          "clone": false,
00:20:53.752          "esnap_clone": false
00:20:53.752        }
00:20:53.752      }
00:20:53.752    }
00:20:53.752  ]'
00:20:53.752     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:53.752    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:20:53.752     17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:53.752    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:20:53.752    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:20:53.752    17:10:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 48b9ac47-fabe-489e-821b-4301fb391193 --l2p_dram_limit 10'
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:20:53.752  /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:20:53.752   17:10:16 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 48b9ac47-fabe-489e-821b-4301fb391193 --l2p_dram_limit 10 -c nvc0n1p0
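Everything accumulated in ftl_construct_args comes together in one call: create FTL bdev ftl0 with the thin lvol as the base (data) device, nvc0n1p0 as the NV cache, and the L2P (logical-to-physical map) capped at 10 MiB of resident DRAM; `-t 240` raises the RPC client timeout because first-time startup scrubs the whole cache (the ~4.3 s "Scrub NV cache" step below). The command, restated:

    # -b: new bdev name; -d: base/data bdev; -c: NV cache bdev.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 \
        -d 48b9ac47-fabe-489e-821b-4301fb391193 \
        --l2p_dram_limit 10 \
        -c nvc0n1p0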
00:20:54.015  [2024-12-09 17:10:16.857710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.857756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:20:54.015  [2024-12-09 17:10:16.857770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:20:54.015  [2024-12-09 17:10:16.857778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.857828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.857836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:54.015  [2024-12-09 17:10:16.857844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.036 ms
00:20:54.015  [2024-12-09 17:10:16.857863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.857884] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:54.015  [2024-12-09 17:10:16.858481] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:54.015  [2024-12-09 17:10:16.858504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.858511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:54.015  [2024-12-09 17:10:16.858519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.625 ms
00:20:54.015  [2024-12-09 17:10:16.858526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.858582] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef202dc6-70d9-47d6-9bf2-4a23092fd7e2
00:20:54.015  [2024-12-09 17:10:16.859947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.859987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:20:54.015  [2024-12-09 17:10:16.859996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:20:54.015  [2024-12-09 17:10:16.860004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.866987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.867108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:54.015  [2024-12-09 17:10:16.867121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.947 ms
00:20:54.015  [2024-12-09 17:10:16.867130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.867205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.867214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:54.015  [2024-12-09 17:10:16.867221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.057 ms
00:20:54.015  [2024-12-09 17:10:16.867232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.867273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.867284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:20:54.015  [2024-12-09 17:10:16.867292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:20:54.015  [2024-12-09 17:10:16.867300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.867318] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:54.015  [2024-12-09 17:10:16.870804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.870915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:54.015  [2024-12-09 17:10:16.870933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.490 ms
00:20:54.015  [2024-12-09 17:10:16.870940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.870975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.870981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:20:54.015  [2024-12-09 17:10:16.870989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:20:54.015  [2024-12-09 17:10:16.870995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.871011] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:20:54.015  [2024-12-09 17:10:16.871129] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:54.015  [2024-12-09 17:10:16.871142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:54.015  [2024-12-09 17:10:16.871151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:54.015  [2024-12-09 17:10:16.871161] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871168] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871176] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:20:54.015  [2024-12-09 17:10:16.871184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:20:54.015  [2024-12-09 17:10:16.871192] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:20:54.015  [2024-12-09 17:10:16.871198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:20:54.015  [2024-12-09 17:10:16.871206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.871218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:20:54.015  [2024-12-09 17:10:16.871225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.197 ms
00:20:54.015  [2024-12-09 17:10:16.871232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.871299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.015  [2024-12-09 17:10:16.871305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:20:54.015  [2024-12-09 17:10:16.871313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:20:54.015  [2024-12-09 17:10:16.871320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.015  [2024-12-09 17:10:16.871411] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:54.015  [2024-12-09 17:10:16.871420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:54.015  [2024-12-09 17:10:16.871429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:54.015  [2024-12-09 17:10:16.871449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:54.015  [2024-12-09 17:10:16.871468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:54.015  [2024-12-09 17:10:16.871481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:54.015  [2024-12-09 17:10:16.871487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:20:54.015  [2024-12-09 17:10:16.871495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:20:54.015  [2024-12-09 17:10:16.871501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:54.015  [2024-12-09 17:10:16.871507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:20:54.015  [2024-12-09 17:10:16.871512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:54.015  [2024-12-09 17:10:16.871528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:54.015  [2024-12-09 17:10:16.871547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:54.015  [2024-12-09 17:10:16.871564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:54.015  [2024-12-09 17:10:16.871582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:54.015  [2024-12-09 17:10:16.871598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:20:54.015  [2024-12-09 17:10:16.871604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:20:54.015  [2024-12-09 17:10:16.871609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:54.016  [2024-12-09 17:10:16.871617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:20:54.016  [2024-12-09 17:10:16.871622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:54.016  [2024-12-09 17:10:16.871628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:54.016  [2024-12-09 17:10:16.871634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:20:54.016  [2024-12-09 17:10:16.871640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:20:54.016  [2024-12-09 17:10:16.871646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:54.016  [2024-12-09 17:10:16.871652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:20:54.016  [2024-12-09 17:10:16.871657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:54.016  [2024-12-09 17:10:16.871663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:54.016  [2024-12-09 17:10:16.871668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:20:54.016  [2024-12-09 17:10:16.871674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:54.016  [2024-12-09 17:10:16.871679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:54.016  [2024-12-09 17:10:16.871688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:54.016  [2024-12-09 17:10:16.871694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:20:54.016  [2024-12-09 17:10:16.871701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:20:54.016  [2024-12-09 17:10:16.871707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:54.016  [2024-12-09 17:10:16.871718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:20:54.016  [2024-12-09 17:10:16.871724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:20:54.016  [2024-12-09 17:10:16.871731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:54.016  [2024-12-09 17:10:16.871736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:20:54.016  [2024-12-09 17:10:16.871743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:20:54.016  [2024-12-09 17:10:16.871749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:54.016  [2024-12-09 17:10:16.871760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:20:54.016  [2024-12-09 17:10:16.871773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:20:54.016  [2024-12-09 17:10:16.871779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:20:54.016  [2024-12-09 17:10:16.871786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:20:54.016  [2024-12-09 17:10:16.871791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:20:54.016  [2024-12-09 17:10:16.871798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:20:54.016  [2024-12-09 17:10:16.871804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:20:54.016  [2024-12-09 17:10:16.871812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:20:54.016  [2024-12-09 17:10:16.871817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:20:54.016  [2024-12-09 17:10:16.871825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:20:54.016  [2024-12-09 17:10:16.871869] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:54.016  [2024-12-09 17:10:16.871878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:54.016  [2024-12-09 17:10:16.871891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:54.016  [2024-12-09 17:10:16.871897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:54.016  [2024-12-09 17:10:16.871904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:54.016  [2024-12-09 17:10:16.871911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:54.016  [2024-12-09 17:10:16.871918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:20:54.016  [2024-12-09 17:10:16.871925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.554 ms
00:20:54.016  [2024-12-09 17:10:16.871933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:54.016  [2024-12-09 17:10:16.871963] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:20:54.016  [2024-12-09 17:10:16.871975] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:20:58.229  [2024-12-09 17:10:21.123533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.123839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:20:58.229  [2024-12-09 17:10:21.123877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4251.552 ms
00:20:58.229  [2024-12-09 17:10:21.123889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.154279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.154447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:58.229  [2024-12-09 17:10:21.154466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.164 ms
00:20:58.229  [2024-12-09 17:10:21.154477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.154608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.154622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:20:58.229  [2024-12-09 17:10:21.154636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:20:58.229  [2024-12-09 17:10:21.154651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.189160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.189307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:58.229  [2024-12-09 17:10:21.189324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.472 ms
00:20:58.229  [2024-12-09 17:10:21.189335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.189372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.189382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:58.229  [2024-12-09 17:10:21.189391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:20:58.229  [2024-12-09 17:10:21.189590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.190119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.190144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:58.229  [2024-12-09 17:10:21.190155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.482 ms
00:20:58.229  [2024-12-09 17:10:21.190165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.190274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.190288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:58.229  [2024-12-09 17:10:21.190298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.088 ms
00:20:58.229  [2024-12-09 17:10:21.190310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.207261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.207301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:58.229  [2024-12-09 17:10:21.207312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.933 ms
00:20:58.229  [2024-12-09 17:10:21.207322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.229  [2024-12-09 17:10:21.232724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:20:58.229  [2024-12-09 17:10:21.236519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.229  [2024-12-09 17:10:21.236673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:20:58.229  [2024-12-09 17:10:21.236696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.098 ms
00:20:58.229  [2024-12-09 17:10:21.236705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.329205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.329258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:20:58.491  [2024-12-09 17:10:21.329274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 92.459 ms
00:20:58.491  [2024-12-09 17:10:21.329282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.329493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.329505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:20:58.491  [2024-12-09 17:10:21.329520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.160 ms
00:20:58.491  [2024-12-09 17:10:21.329528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.354833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.355055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:20:58.491  [2024-12-09 17:10:21.355082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.250 ms
00:20:58.491  [2024-12-09 17:10:21.355095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.379971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.380023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:20:58.491  [2024-12-09 17:10:21.380040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.800 ms
00:20:58.491  [2024-12-09 17:10:21.380049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.380723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.380744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:20:58.491  [2024-12-09 17:10:21.380761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.619 ms
00:20:58.491  [2024-12-09 17:10:21.380769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.474568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.474798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:20:58.491  [2024-12-09 17:10:21.474836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 93.744 ms
00:20:58.491  [2024-12-09 17:10:21.474867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.491  [2024-12-09 17:10:21.503537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.491  [2024-12-09 17:10:21.503593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:20:58.491  [2024-12-09 17:10:21.503611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.567 ms
00:20:58.491  [2024-12-09 17:10:21.503620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.752  [2024-12-09 17:10:21.529900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.752  [2024-12-09 17:10:21.529952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:20:58.752  [2024-12-09 17:10:21.529968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.219 ms
00:20:58.752  [2024-12-09 17:10:21.529976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.752  [2024-12-09 17:10:21.556465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.752  [2024-12-09 17:10:21.556541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:20:58.752  [2024-12-09 17:10:21.556560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.431 ms
00:20:58.752  [2024-12-09 17:10:21.556567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.752  [2024-12-09 17:10:21.556628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.752  [2024-12-09 17:10:21.556639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:20:58.752  [2024-12-09 17:10:21.556654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:20:58.752  [2024-12-09 17:10:21.556662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.752  [2024-12-09 17:10:21.556765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:58.752  [2024-12-09 17:10:21.556780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:20:58.752  [2024-12-09 17:10:21.556792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.042 ms
00:20:58.752  [2024-12-09 17:10:21.556800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:58.752  [2024-12-09 17:10:21.558205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4699.873 ms, result 0
00:20:58.752  {
00:20:58.752    "name": "ftl0",
00:20:58.752    "uuid": "ef202dc6-70d9-47d6-9bf2-4a23092fd7e2"
00:20:58.752  }
00:20:58.752   17:10:21 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:20:58.752   17:10:21 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:59.012   17:10:21 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
00:20:59.012   17:10:21 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:20:59.012  [2024-12-09 17:10:22.025525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.012  [2024-12-09 17:10:22.025603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:20:59.013  [2024-12-09 17:10:22.025620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:20:59.013  [2024-12-09 17:10:22.025633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.013  [2024-12-09 17:10:22.025662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:59.013  [2024-12-09 17:10:22.029110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.013  [2024-12-09 17:10:22.029153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:20:59.013  [2024-12-09 17:10:22.029170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.423 ms
00:20:59.013  [2024-12-09 17:10:22.029179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.013  [2024-12-09 17:10:22.029514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.013  [2024-12-09 17:10:22.029526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:20:59.013  [2024-12-09 17:10:22.029540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.292 ms
00:20:59.013  [2024-12-09 17:10:22.029548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.013  [2024-12-09 17:10:22.032812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.013  [2024-12-09 17:10:22.033005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:20:59.013  [2024-12-09 17:10:22.033030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.243 ms
00:20:59.013  [2024-12-09 17:10:22.033040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.013  [2024-12-09 17:10:22.039358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.013  [2024-12-09 17:10:22.039407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:20:59.013  [2024-12-09 17:10:22.039423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.285 ms
00:20:59.013  [2024-12-09 17:10:22.039431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.066888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.067091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:20:59.274  [2024-12-09 17:10:22.067121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.371 ms
00:20:59.274  [2024-12-09 17:10:22.067129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.084969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.085020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:20:59.274  [2024-12-09 17:10:22.085037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.781 ms
00:20:59.274  [2024-12-09 17:10:22.085046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.085231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.085245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:20:59.274  [2024-12-09 17:10:22.085258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.128 ms
00:20:59.274  [2024-12-09 17:10:22.085271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.111428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.111476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:20:59.274  [2024-12-09 17:10:22.111492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.134 ms
00:20:59.274  [2024-12-09 17:10:22.111500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.137025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.137075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:20:59.274  [2024-12-09 17:10:22.137090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.469 ms
00:20:59.274  [2024-12-09 17:10:22.137097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.162008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.162056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:20:59.274  [2024-12-09 17:10:22.162071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.852 ms
00:20:59.274  [2024-12-09 17:10:22.162078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.186745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.274  [2024-12-09 17:10:22.186807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:20:59.274  [2024-12-09 17:10:22.186822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.560 ms
00:20:59.274  [2024-12-09 17:10:22.186829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.274  [2024-12-09 17:10:22.186900] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:59.274  [2024-12-09 17:10:22.186924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.274  [2024-12-09 17:10:22.186938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.274  [2024-12-09 17:10:22.186946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.274  [2024-12-09 17:10:22.186960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.186969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.186981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.186990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.275  [2024-12-09 17:10:22.187860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.276  [2024-12-09 17:10:22.187870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.276  [2024-12-09 17:10:22.187885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.276  [2024-12-09 17:10:22.187893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.276  [2024-12-09 17:10:22.187905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:20:59.276  [2024-12-09 17:10:22.187921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:59.276  [2024-12-09 17:10:22.187933] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         ef202dc6-70d9-47d6-9bf2-4a23092fd7e2
00:20:59.276  [2024-12-09 17:10:22.187944] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:20:59.276  [2024-12-09 17:10:22.187962] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:20:59.276  [2024-12-09 17:10:22.187970] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:20:59.276  [2024-12-09 17:10:22.187981] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:20:59.276  [2024-12-09 17:10:22.187988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:59.276  [2024-12-09 17:10:22.187999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:20:59.276  [2024-12-09 17:10:22.188006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:20:59.276  [2024-12-09 17:10:22.188016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:20:59.276  [2024-12-09 17:10:22.188023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:20:59.276  [2024-12-09 17:10:22.188036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.276  [2024-12-09 17:10:22.188045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:20:59.276  [2024-12-09 17:10:22.188056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.138 ms
00:20:59.276  [2024-12-09 17:10:22.188067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.276  [2024-12-09 17:10:22.202537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.276  [2024-12-09 17:10:22.202583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:20:59.276  [2024-12-09 17:10:22.202600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.419 ms
00:20:59.276  [2024-12-09 17:10:22.202608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.276  [2024-12-09 17:10:22.203087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:59.276  [2024-12-09 17:10:22.203104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:20:59.276  [2024-12-09 17:10:22.203118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.430 ms
00:20:59.276  [2024-12-09 17:10:22.203127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.276  [2024-12-09 17:10:22.253391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.276  [2024-12-09 17:10:22.253608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:20:59.276  [2024-12-09 17:10:22.253636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.276  [2024-12-09 17:10:22.253645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.276  [2024-12-09 17:10:22.253732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.276  [2024-12-09 17:10:22.253744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:20:59.276  [2024-12-09 17:10:22.253756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.276  [2024-12-09 17:10:22.253765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.276  [2024-12-09 17:10:22.253916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.276  [2024-12-09 17:10:22.253929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:20:59.276  [2024-12-09 17:10:22.253941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.276  [2024-12-09 17:10:22.253949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.276  [2024-12-09 17:10:22.253975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.276  [2024-12-09 17:10:22.253984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:20:59.276  [2024-12-09 17:10:22.253998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.276  [2024-12-09 17:10:22.254006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.346872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.346937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:20:59.537  [2024-12-09 17:10:22.346954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.346964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.422206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.422271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:20:59.537  [2024-12-09 17:10:22.422293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.422302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.422412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.422425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:20:59.537  [2024-12-09 17:10:22.422437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.422446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.422524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.422535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:20:59.537  [2024-12-09 17:10:22.422548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.422561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.422689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.422700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:20:59.537  [2024-12-09 17:10:22.422713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.422721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.422773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.422784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:20:59.537  [2024-12-09 17:10:22.422795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.422804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.422904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.422916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:20:59.537  [2024-12-09 17:10:22.422928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.422937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.423008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:59.537  [2024-12-09 17:10:22.423019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:20:59.537  [2024-12-09 17:10:22.423030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:20:59.537  [2024-12-09 17:10:22.423041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:20:59.537  [2024-12-09 17:10:22.423231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.644 ms, result 0
00:20:59.537  true
00:20:59.537   17:10:22 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78710
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78710 ']'
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78710
00:20:59.537    17:10:22 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:59.537    17:10:22 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78710
00:20:59.537  killing process with pid 78710
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78710'
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78710
00:20:59.537   17:10:22 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78710
00:21:02.850   17:10:25 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:21:07.057  262144+0 records in
00:21:07.057  262144+0 records out
00:21:07.057  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.9529 s, 272 MB/s
00:21:07.057   17:10:29 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:21:08.443   17:10:31 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:08.443  [2024-12-09 17:10:31.208328] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:21:08.443  [2024-12-09 17:10:31.208446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78950 ]
00:21:08.443  [2024-12-09 17:10:31.363999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:08.443  [2024-12-09 17:10:31.479728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:09.018  [2024-12-09 17:10:31.783646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:09.018  [2024-12-09 17:10:31.783746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:09.018  [2024-12-09 17:10:31.948894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.948965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:21:09.019  [2024-12-09 17:10:31.948983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:21:09.019  [2024-12-09 17:10:31.948993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.949059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.949074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:21:09.019  [2024-12-09 17:10:31.949083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.044 ms
00:21:09.019  [2024-12-09 17:10:31.949092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.949115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:09.019  [2024-12-09 17:10:31.949832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:09.019  [2024-12-09 17:10:31.949894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.949904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:21:09.019  [2024-12-09 17:10:31.949915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.786 ms
00:21:09.019  [2024-12-09 17:10:31.949923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.952202] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:09.019  [2024-12-09 17:10:31.967912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.967967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:21:09.019  [2024-12-09 17:10:31.967983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.712 ms
00:21:09.019  [2024-12-09 17:10:31.967993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.968089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.968099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:21:09.019  [2024-12-09 17:10:31.968109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:21:09.019  [2024-12-09 17:10:31.968118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.979940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.979986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:21:09.019  [2024-12-09 17:10:31.980000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.739 ms
00:21:09.019  [2024-12-09 17:10:31.980016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.980105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.980115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:21:09.019  [2024-12-09 17:10:31.980125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.064 ms
00:21:09.019  [2024-12-09 17:10:31.980132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.980197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.980209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:21:09.019  [2024-12-09 17:10:31.980217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:21:09.019  [2024-12-09 17:10:31.980225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.980253] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:09.019  [2024-12-09 17:10:31.984973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.985018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:21:09.019  [2024-12-09 17:10:31.985034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.726 ms
00:21:09.019  [2024-12-09 17:10:31.985042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.985090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.985100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:21:09.019  [2024-12-09 17:10:31.985109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.020 ms
00:21:09.019  [2024-12-09 17:10:31.985118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.985156] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:21:09.019  [2024-12-09 17:10:31.985184] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:21:09.019  [2024-12-09 17:10:31.985225] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:21:09.019  [2024-12-09 17:10:31.985246] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:21:09.019  [2024-12-09 17:10:31.985359] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:09.019  [2024-12-09 17:10:31.985371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:09.019  [2024-12-09 17:10:31.985383] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:21:09.019  [2024-12-09 17:10:31.985394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985405] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985414] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:21:09.019  [2024-12-09 17:10:31.985422] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:21:09.019  [2024-12-09 17:10:31.985433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:21:09.019  [2024-12-09 17:10:31.985448] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:21:09.019  [2024-12-09 17:10:31.985457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.985466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:21:09.019  [2024-12-09 17:10:31.985474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.304 ms
00:21:09.019  [2024-12-09 17:10:31.985482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.985565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.019  [2024-12-09 17:10:31.985574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:21:09.019  [2024-12-09 17:10:31.985582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:21:09.019  [2024-12-09 17:10:31.985589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.019  [2024-12-09 17:10:31.985697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:09.019  [2024-12-09 17:10:31.985709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:21:09.019  [2024-12-09 17:10:31.985718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:21:09.019  [2024-12-09 17:10:31.985742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:21:09.019  [2024-12-09 17:10:31.985766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:21:09.019  [2024-12-09 17:10:31.985782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:21:09.019  [2024-12-09 17:10:31.985789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:21:09.019  [2024-12-09 17:10:31.985796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:21:09.019  [2024-12-09 17:10:31.985811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:21:09.019  [2024-12-09 17:10:31.985820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:21:09.019  [2024-12-09 17:10:31.985828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:21:09.019  [2024-12-09 17:10:31.985843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:21:09.019  [2024-12-09 17:10:31.985896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:21:09.019  [2024-12-09 17:10:31.985918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:21:09.019  [2024-12-09 17:10:31.985939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:21:09.019  [2024-12-09 17:10:31.985961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:09.019  [2024-12-09 17:10:31.985976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:21:09.019  [2024-12-09 17:10:31.985984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:21:09.019  [2024-12-09 17:10:31.985992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:21:09.019  [2024-12-09 17:10:31.985999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:21:09.019  [2024-12-09 17:10:31.986006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:21:09.019  [2024-12-09 17:10:31.986013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:21:09.019  [2024-12-09 17:10:31.986020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:21:09.019  [2024-12-09 17:10:31.986027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:21:09.019  [2024-12-09 17:10:31.986033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:09.019  [2024-12-09 17:10:31.986041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:21:09.019  [2024-12-09 17:10:31.986048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:21:09.020  [2024-12-09 17:10:31.986056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:09.020  [2024-12-09 17:10:31.986064] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:09.020  [2024-12-09 17:10:31.986073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:21:09.020  [2024-12-09 17:10:31.986081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:21:09.020  [2024-12-09 17:10:31.986090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:09.020  [2024-12-09 17:10:31.986100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:21:09.020  [2024-12-09 17:10:31.986108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:21:09.020  [2024-12-09 17:10:31.986115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:21:09.020  [2024-12-09 17:10:31.986123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:21:09.020  [2024-12-09 17:10:31.986129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:21:09.020  [2024-12-09 17:10:31.986136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:21:09.020  [2024-12-09 17:10:31.986146] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:09.020  [2024-12-09 17:10:31.986156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:21:09.020  [2024-12-09 17:10:31.986184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:21:09.020  [2024-12-09 17:10:31.986192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:21:09.020  [2024-12-09 17:10:31.986199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:21:09.020  [2024-12-09 17:10:31.986206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:21:09.020  [2024-12-09 17:10:31.986213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:21:09.020  [2024-12-09 17:10:31.986221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:21:09.020  [2024-12-09 17:10:31.986228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:21:09.020  [2024-12-09 17:10:31.986236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:21:09.020  [2024-12-09 17:10:31.986244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:21:09.020  [2024-12-09 17:10:31.986284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:09.020  [2024-12-09 17:10:31.986292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:09.020  [2024-12-09 17:10:31.986308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:09.020  [2024-12-09 17:10:31.986315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:09.020  [2024-12-09 17:10:31.986322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:09.020  [2024-12-09 17:10:31.986329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.020  [2024-12-09 17:10:31.986338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:21:09.020  [2024-12-09 17:10:31.986346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.702 ms
00:21:09.020  [2024-12-09 17:10:31.986356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.020  [2024-12-09 17:10:32.025002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.020  [2024-12-09 17:10:32.025061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:21:09.020  [2024-12-09 17:10:32.025079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.597 ms
00:21:09.020  [2024-12-09 17:10:32.025089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.020  [2024-12-09 17:10:32.025192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.020  [2024-12-09 17:10:32.025202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:21:09.020  [2024-12-09 17:10:32.025211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:21:09.020  [2024-12-09 17:10:32.025223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.077490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.077729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:21:09.283  [2024-12-09 17:10:32.077755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 52.201 ms
00:21:09.283  [2024-12-09 17:10:32.077764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.077822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.077841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:21:09.283  [2024-12-09 17:10:32.077878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:21:09.283  [2024-12-09 17:10:32.077887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.078667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.078716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:21:09.283  [2024-12-09 17:10:32.078728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.694 ms
00:21:09.283  [2024-12-09 17:10:32.078736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.078930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.078948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:21:09.283  [2024-12-09 17:10:32.078958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.157 ms
00:21:09.283  [2024-12-09 17:10:32.078967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.097512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.097567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:09.283  [2024-12-09 17:10:32.097580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.524 ms
00:21:09.283  [2024-12-09 17:10:32.097589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.113051] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:21:09.283  [2024-12-09 17:10:32.113109] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:09.283  [2024-12-09 17:10:32.113125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.113135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:21:09.283  [2024-12-09 17:10:32.113145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.410 ms
00:21:09.283  [2024-12-09 17:10:32.113152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.139478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.139533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:21:09.283  [2024-12-09 17:10:32.139546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.265 ms
00:21:09.283  [2024-12-09 17:10:32.139554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.152640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.152694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:21:09.283  [2024-12-09 17:10:32.152708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.023 ms
00:21:09.283  [2024-12-09 17:10:32.152715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.165737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.165789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:21:09.283  [2024-12-09 17:10:32.165802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.971 ms
00:21:09.283  [2024-12-09 17:10:32.165809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.166487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.166516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:21:09.283  [2024-12-09 17:10:32.166531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.539 ms
00:21:09.283  [2024-12-09 17:10:32.166539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.240223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.240297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:21:09.283  [2024-12-09 17:10:32.240324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 73.663 ms
00:21:09.283  [2024-12-09 17:10:32.240334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.252274] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:21:09.283  [2024-12-09 17:10:32.255976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.256026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:21:09.283  [2024-12-09 17:10:32.256041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.575 ms
00:21:09.283  [2024-12-09 17:10:32.256049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.256143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.256156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:21:09.283  [2024-12-09 17:10:32.256167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:21:09.283  [2024-12-09 17:10:32.256180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.256263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.256276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:21:09.283  [2024-12-09 17:10:32.256286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:21:09.283  [2024-12-09 17:10:32.256295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.256319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.256329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:21:09.283  [2024-12-09 17:10:32.256338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:21:09.283  [2024-12-09 17:10:32.256347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.256395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:09.283  [2024-12-09 17:10:32.256409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.256419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:21:09.283  [2024-12-09 17:10:32.256428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.015 ms
00:21:09.283  [2024-12-09 17:10:32.256437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.283784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.284026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:21:09.283  [2024-12-09 17:10:32.284052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.327 ms
00:21:09.283  [2024-12-09 17:10:32.284070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.284158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:09.283  [2024-12-09 17:10:32.284170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:21:09.283  [2024-12-09 17:10:32.284180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.043 ms
00:21:09.283  [2024-12-09 17:10:32.284188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:09.283  [2024-12-09 17:10:32.285981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.493 ms, result 0
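The startup summary above reports both per-step durations and an overall total (336.493 ms). A minimal sketch for tallying the step durations from a saved copy of this log; the filename and regex are illustrative assumptions, and the sum will come in slightly under the reported total because time spent between steps is not attributed to any step:

    import re

    step_re = re.compile(r"duration:\s+([0-9.]+) ms")
    total = 0.0
    with open("ftl.log") as f:  # hypothetical file holding the output above
        for line in f:
            m = step_re.search(line)
            if m:
                total += float(m.group(1))
    print(f"sum of step durations: {total:.3f} ms")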
00:21:10.285  
[2024-12-09T17:10:34.714Z] Copying: 11/1024 [MB] (11 MBps)
[2024-12-09T17:10:35.655Z] Copying: 25/1024 [MB] (13 MBps)
[2024-12-09T17:10:36.599Z] Copying: 43/1024 [MB] (18 MBps)
[2024-12-09T17:10:37.543Z] Copying: 66/1024 [MB] (23 MBps)
[2024-12-09T17:10:38.487Z] Copying: 88/1024 [MB] (22 MBps)
[2024-12-09T17:10:39.431Z] Copying: 103/1024 [MB] (14 MBps)
[2024-12-09T17:10:40.376Z] Copying: 116024/1048576 [kB] (9704 kBps)
[2024-12-09T17:10:41.320Z] Copying: 124/1024 [MB] (11 MBps)
[2024-12-09T17:10:42.709Z] Copying: 145/1024 [MB] (20 MBps)
[2024-12-09T17:10:43.654Z] Copying: 158904/1048576 [kB] (9832 kBps)
[2024-12-09T17:10:44.598Z] Copying: 166/1024 [MB] (11 MBps)
[2024-12-09T17:10:45.542Z] Copying: 178/1024 [MB] (11 MBps)
[2024-12-09T17:10:46.487Z] Copying: 190/1024 [MB] (12 MBps)
[2024-12-09T17:10:47.432Z] Copying: 203/1024 [MB] (13 MBps)
[2024-12-09T17:10:48.377Z] Copying: 221/1024 [MB] (17 MBps)
[2024-12-09T17:10:49.393Z] Copying: 233/1024 [MB] (12 MBps)
[2024-12-09T17:10:50.338Z] Copying: 252/1024 [MB] (19 MBps)
[2024-12-09T17:10:51.727Z] Copying: 266/1024 [MB] (13 MBps)
[2024-12-09T17:10:52.301Z] Copying: 279/1024 [MB] (13 MBps)
[2024-12-09T17:10:53.689Z] Copying: 290/1024 [MB] (10 MBps)
[2024-12-09T17:10:54.634Z] Copying: 302/1024 [MB] (12 MBps)
[2024-12-09T17:10:55.576Z] Copying: 313/1024 [MB] (10 MBps)
[2024-12-09T17:10:56.519Z] Copying: 327/1024 [MB] (14 MBps)
[2024-12-09T17:10:57.463Z] Copying: 341/1024 [MB] (13 MBps)
[2024-12-09T17:10:58.405Z] Copying: 358/1024 [MB] (17 MBps)
[2024-12-09T17:10:59.347Z] Copying: 379/1024 [MB] (21 MBps)
[2024-12-09T17:11:00.735Z] Copying: 401/1024 [MB] (21 MBps)
[2024-12-09T17:11:01.307Z] Copying: 418/1024 [MB] (16 MBps)
[2024-12-09T17:11:02.695Z] Copying: 430/1024 [MB] (12 MBps)
[2024-12-09T17:11:03.639Z] Copying: 442/1024 [MB] (11 MBps)
[2024-12-09T17:11:04.614Z] Copying: 457/1024 [MB] (14 MBps)
[2024-12-09T17:11:05.562Z] Copying: 467/1024 [MB] (10 MBps)
[2024-12-09T17:11:06.506Z] Copying: 478/1024 [MB] (10 MBps)
[2024-12-09T17:11:07.448Z] Copying: 490/1024 [MB] (12 MBps)
[2024-12-09T17:11:08.390Z] Copying: 508/1024 [MB] (17 MBps)
[2024-12-09T17:11:09.335Z] Copying: 521/1024 [MB] (12 MBps)
[2024-12-09T17:11:10.724Z] Copying: 533/1024 [MB] (11 MBps)
[2024-12-09T17:11:11.668Z] Copying: 545/1024 [MB] (12 MBps)
[2024-12-09T17:11:12.613Z] Copying: 560/1024 [MB] (14 MBps)
[2024-12-09T17:11:13.556Z] Copying: 570/1024 [MB] (10 MBps)
[2024-12-09T17:11:14.502Z] Copying: 586/1024 [MB] (15 MBps)
[2024-12-09T17:11:15.446Z] Copying: 602/1024 [MB] (16 MBps)
[2024-12-09T17:11:16.390Z] Copying: 612/1024 [MB] (10 MBps)
[2024-12-09T17:11:17.333Z] Copying: 633/1024 [MB] (20 MBps)
[2024-12-09T17:11:18.724Z] Copying: 649/1024 [MB] (16 MBps)
[2024-12-09T17:11:19.669Z] Copying: 662/1024 [MB] (12 MBps)
[2024-12-09T17:11:20.631Z] Copying: 679/1024 [MB] (16 MBps)
[2024-12-09T17:11:21.578Z] Copying: 694/1024 [MB] (14 MBps)
[2024-12-09T17:11:22.522Z] Copying: 712/1024 [MB] (17 MBps)
[2024-12-09T17:11:23.464Z] Copying: 732/1024 [MB] (20 MBps)
[2024-12-09T17:11:24.408Z] Copying: 746/1024 [MB] (13 MBps)
[2024-12-09T17:11:25.354Z] Copying: 773356/1048576 [kB] (9332 kBps)
[2024-12-09T17:11:26.742Z] Copying: 772/1024 [MB] (16 MBps)
[2024-12-09T17:11:27.315Z] Copying: 784/1024 [MB] (12 MBps)
[2024-12-09T17:11:28.703Z] Copying: 801/1024 [MB] (16 MBps)
[2024-12-09T17:11:29.646Z] Copying: 815/1024 [MB] (14 MBps)
[2024-12-09T17:11:30.590Z] Copying: 835/1024 [MB] (19 MBps)
[2024-12-09T17:11:31.536Z] Copying: 853/1024 [MB] (18 MBps)
[2024-12-09T17:11:32.481Z] Copying: 866/1024 [MB] (13 MBps)
[2024-12-09T17:11:33.426Z] Copying: 877/1024 [MB] (10 MBps)
[2024-12-09T17:11:34.371Z] Copying: 890/1024 [MB] (13 MBps)
[2024-12-09T17:11:35.314Z] Copying: 905/1024 [MB] (14 MBps)
[2024-12-09T17:11:36.718Z] Copying: 915/1024 [MB] (10 MBps)
[2024-12-09T17:11:37.661Z] Copying: 929/1024 [MB] (13 MBps)
[2024-12-09T17:11:38.606Z] Copying: 945/1024 [MB] (16 MBps)
[2024-12-09T17:11:39.550Z] Copying: 964/1024 [MB] (18 MBps)
[2024-12-09T17:11:40.495Z] Copying: 979/1024 [MB] (14 MBps)
[2024-12-09T17:11:41.440Z] Copying: 991/1024 [MB] (11 MBps)
[2024-12-09T17:11:42.385Z] Copying: 1005/1024 [MB] (14 MBps)
[2024-12-09T17:11:42.647Z] Copying: 1021/1024 [MB] (16 MBps)
[2024-12-09T17:11:42.647Z] Copying: 1024/1024 [MB] (average 14 MBps)
00:22:19.606  [2024-12-09 17:11:42.515674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.606  [2024-12-09 17:11:42.515714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:22:19.606  [2024-12-09 17:11:42.515727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:22:19.606  [2024-12-09 17:11:42.515733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.606  [2024-12-09 17:11:42.515750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:19.606  [2024-12-09 17:11:42.518080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.606  [2024-12-09 17:11:42.518109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:22:19.606  [2024-12-09 17:11:42.518123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.318 ms
00:22:19.606  [2024-12-09 17:11:42.518130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.606  [2024-12-09 17:11:42.520873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.606  [2024-12-09 17:11:42.520899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:22:19.606  [2024-12-09 17:11:42.520906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.726 ms
00:22:19.606  [2024-12-09 17:11:42.520912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.606  [2024-12-09 17:11:42.536501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.606  [2024-12-09 17:11:42.536530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:22:19.606  [2024-12-09 17:11:42.536539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.577 ms
00:22:19.606  [2024-12-09 17:11:42.536549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.606  [2024-12-09 17:11:42.541313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.541336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:22:19.607  [2024-12-09 17:11:42.541345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.740 ms
00:22:19.607  [2024-12-09 17:11:42.541352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.561160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.561190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:22:19.607  [2024-12-09 17:11:42.561198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.776 ms
00:22:19.607  [2024-12-09 17:11:42.561204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.573416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.573443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:22:19.607  [2024-12-09 17:11:42.573453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.184 ms
00:22:19.607  [2024-12-09 17:11:42.573460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.573555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.573562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:22:19.607  [2024-12-09 17:11:42.573569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.063 ms
00:22:19.607  [2024-12-09 17:11:42.573575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.591555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.591682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:22:19.607  [2024-12-09 17:11:42.591696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.969 ms
00:22:19.607  [2024-12-09 17:11:42.591702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.609998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.610108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:22:19.607  [2024-12-09 17:11:42.610120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.271 ms
00:22:19.607  [2024-12-09 17:11:42.610125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.627518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.607  [2024-12-09 17:11:42.627543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:22:19.607  [2024-12-09 17:11:42.627550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.370 ms
00:22:19.607  [2024-12-09 17:11:42.627556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.607  [2024-12-09 17:11:42.644978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.870  [2024-12-09 17:11:42.645082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:22:19.870  [2024-12-09 17:11:42.645094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.379 ms
00:22:19.870  [2024-12-09 17:11:42.645099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.870  [2024-12-09 17:11:42.645122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:19.870  [2024-12-09 17:11:42.645137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.870  [2024-12-09 17:11:42.645388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:22:19.871  [2024-12-09 17:11:42.645711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
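With 100 bands all reporting identical counters, the dump above is easier to verify mechanically than by eye. A small sketch, again run against a hypothetical saved copy of the log, that tallies band states:

    import re
    from collections import Counter

    band_re = re.compile(r"Band\s+\d+:\s+\d+ / \d+\s+wr_cnt: \d+\s+state: (\w+)")
    states = Counter()
    with open("ftl.log") as f:  # hypothetical capture of the dump above
        for line in f:
            m = band_re.search(line)
            if m:
                states[m.group(1)] += 1
    print(dict(states))  # expected {'free': 100} for the dump above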
00:22:19.871  [2024-12-09 17:11:42.645722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:22:19.871  [2024-12-09 17:11:42.645728] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         ef202dc6-70d9-47d6-9bf2-4a23092fd7e2
00:22:19.871  [2024-12-09 17:11:42.645734] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:22:19.871  [2024-12-09 17:11:42.645740] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:22:19.871  [2024-12-09 17:11:42.645746] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:22:19.871  [2024-12-09 17:11:42.645752] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:22:19.871  [2024-12-09 17:11:42.645757] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:19.871  [2024-12-09 17:11:42.645769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:22:19.871  [2024-12-09 17:11:42.645775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:22:19.871  [2024-12-09 17:11:42.645781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:22:19.871  [2024-12-09 17:11:42.645786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
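The "WAF: inf" line above follows from the two counters printed with it: write amplification is the ratio of total media writes to user writes, and with 960 total writes but zero user writes the ratio is undefined, which the driver renders as infinity. A sketch of the calculation (the function name is my own):

    def waf(total_writes, user_writes):
        # Write amplification factor: media writes per user write.
        # No user writes yet -> the ratio is undefined; report infinity.
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # inf, matching the stats dump above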
00:22:19.871  [2024-12-09 17:11:42.645792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.871  [2024-12-09 17:11:42.645798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:22:19.871  [2024-12-09 17:11:42.645804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.671 ms
00:22:19.871  [2024-12-09 17:11:42.645812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.655828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.871  [2024-12-09 17:11:42.655865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:22:19.871  [2024-12-09 17:11:42.655873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.004 ms
00:22:19.871  [2024-12-09 17:11:42.655879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.656167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:19.871  [2024-12-09 17:11:42.656249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:22:19.871  [2024-12-09 17:11:42.656263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.275 ms
00:22:19.871  [2024-12-09 17:11:42.656269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.683878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.871  [2024-12-09 17:11:42.683906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:22:19.871  [2024-12-09 17:11:42.683915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.871  [2024-12-09 17:11:42.683921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.683970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.871  [2024-12-09 17:11:42.683976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:22:19.871  [2024-12-09 17:11:42.683985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.871  [2024-12-09 17:11:42.683992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.684045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.871  [2024-12-09 17:11:42.684052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:22:19.871  [2024-12-09 17:11:42.684059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.871  [2024-12-09 17:11:42.684065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.684077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.871  [2024-12-09 17:11:42.684083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:22:19.871  [2024-12-09 17:11:42.684089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.871  [2024-12-09 17:11:42.684097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.747964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.871  [2024-12-09 17:11:42.747999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:22:19.871  [2024-12-09 17:11:42.748015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.871  [2024-12-09 17:11:42.748023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.871  [2024-12-09 17:11:42.799628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.871  [2024-12-09 17:11:42.799773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:22:19.871  [2024-12-09 17:11:42.799787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.799799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.799885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.872  [2024-12-09 17:11:42.799894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:22:19.872  [2024-12-09 17:11:42.799901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.799907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.799936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.872  [2024-12-09 17:11:42.799944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:22:19.872  [2024-12-09 17:11:42.799951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.799957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.800037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.872  [2024-12-09 17:11:42.800045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:22:19.872  [2024-12-09 17:11:42.800054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.800060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.800085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.872  [2024-12-09 17:11:42.800093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:22:19.872  [2024-12-09 17:11:42.800099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.800106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.800140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.872  [2024-12-09 17:11:42.800148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:22:19.872  [2024-12-09 17:11:42.800154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.800161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.800200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:19.872  [2024-12-09 17:11:42.800208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:22:19.872  [2024-12-09 17:11:42.800215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:19.872  [2024-12-09 17:11:42.800221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:19.872  [2024-12-09 17:11:42.800329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 284.626 ms, result 0
00:22:20.817  
00:22:20.817  
00:22:20.817   17:11:43 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
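00:22:20.817  spdk_dd follows dd-style semantics: --ib names an SPDK bdev (here the FTL device ftl0) as the input, --of a regular file as the output, and --count the number of blocks to transfer. Assuming the device's 4 KiB logical block size, 262144 blocks works out to the 1024 MiB moved in this test's copy phases:

    block_size = 4096   # assumed 4 KiB logical block size
    count = 262144      # --count from the spdk_dd invocation above
    print(count * block_size // (1024 * 1024), "MiB")  # 1024 MiB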
00:22:20.817  [2024-12-09 17:11:43.623713] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:22:20.817  [2024-12-09 17:11:43.623868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79695 ]
00:22:20.817  [2024-12-09 17:11:43.782604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:21.078  [2024-12-09 17:11:43.872346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:21.078  [2024-12-09 17:11:44.105319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:21.078  [2024-12-09 17:11:44.105380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:21.341  [2024-12-09 17:11:44.261647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.261689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:22:21.341  [2024-12-09 17:11:44.261702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:22:21.341  [2024-12-09 17:11:44.261708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.261749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.261760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:22:21.341  [2024-12-09 17:11:44.261767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:22:21.341  [2024-12-09 17:11:44.261773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.261787] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:21.341  [2024-12-09 17:11:44.262334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:21.341  [2024-12-09 17:11:44.262352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.262359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:22:21.341  [2024-12-09 17:11:44.262366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.569 ms
00:22:21.341  [2024-12-09 17:11:44.262373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.263622] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:21.341  [2024-12-09 17:11:44.274473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.274502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:22:21.341  [2024-12-09 17:11:44.274512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.852 ms
00:22:21.341  [2024-12-09 17:11:44.274518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.274570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.274577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:22:21.341  [2024-12-09 17:11:44.274584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:22:21.341  [2024-12-09 17:11:44.274590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.281033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.281197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:22:21.341  [2024-12-09 17:11:44.281209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.401 ms
00:22:21.341  [2024-12-09 17:11:44.281220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.281279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.281286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:22:21.341  [2024-12-09 17:11:44.281293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.044 ms
00:22:21.341  [2024-12-09 17:11:44.281299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.341  [2024-12-09 17:11:44.281340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.341  [2024-12-09 17:11:44.281348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:22:21.341  [2024-12-09 17:11:44.281355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:22:21.341  [2024-12-09 17:11:44.281362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.342  [2024-12-09 17:11:44.281379] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:21.342  [2024-12-09 17:11:44.284335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.342  [2024-12-09 17:11:44.284442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:22:21.342  [2024-12-09 17:11:44.284473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.960 ms
00:22:21.342  [2024-12-09 17:11:44.284479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.342  [2024-12-09 17:11:44.284512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.342  [2024-12-09 17:11:44.284519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:22:21.342  [2024-12-09 17:11:44.284526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:22:21.342  [2024-12-09 17:11:44.284532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.342  [2024-12-09 17:11:44.284548] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:22:21.342  [2024-12-09 17:11:44.284567] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:22:21.342  [2024-12-09 17:11:44.284596] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:22:21.342  [2024-12-09 17:11:44.284611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:22:21.342  [2024-12-09 17:11:44.284696] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:21.342  [2024-12-09 17:11:44.284705] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:21.342  [2024-12-09 17:11:44.284714] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:21.342  [2024-12-09 17:11:44.284723] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:22:21.342  [2024-12-09 17:11:44.284729] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:22:21.342  [2024-12-09 17:11:44.284736] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:22:21.342  [2024-12-09 17:11:44.284742] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:22:21.342  [2024-12-09 17:11:44.284751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:22:21.342  [2024-12-09 17:11:44.284758] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
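The layout summary above is internally consistent: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region reported in the NV cache layout that follows:

    entries = 20971520   # L2P entries from the summary above
    addr_size = 4        # bytes per entry ("L2P address size")
    print(entries * addr_size / (1024 * 1024), "MiB")  # 80.0 -- the l2p region below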
00:22:21.342  [2024-12-09 17:11:44.284765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.342  [2024-12-09 17:11:44.284772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:22:21.342  [2024-12-09 17:11:44.284778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.220 ms
00:22:21.342  [2024-12-09 17:11:44.284783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.342  [2024-12-09 17:11:44.284862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.342  [2024-12-09 17:11:44.284870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:22:21.342  [2024-12-09 17:11:44.284876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.067 ms
00:22:21.342  [2024-12-09 17:11:44.284882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.342  [2024-12-09 17:11:44.284964] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:21.342  [2024-12-09 17:11:44.284972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:22:21.342  [2024-12-09 17:11:44.284979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:22:21.342  [2024-12-09 17:11:44.284986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.284993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:22:21.342  [2024-12-09 17:11:44.284999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:22:21.342  [2024-12-09 17:11:44.285017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:22:21.342  [2024-12-09 17:11:44.285028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:22:21.342  [2024-12-09 17:11:44.285035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:22:21.342  [2024-12-09 17:11:44.285041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:22:21.342  [2024-12-09 17:11:44.285051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:22:21.342  [2024-12-09 17:11:44.285057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:22:21.342  [2024-12-09 17:11:44.285062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:22:21.342  [2024-12-09 17:11:44.285072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:22:21.342  [2024-12-09 17:11:44.285089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:22:21.342  [2024-12-09 17:11:44.285104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:22:21.342  [2024-12-09 17:11:44.285118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:22:21.342  [2024-12-09 17:11:44.285133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:22:21.342  [2024-12-09 17:11:44.285150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:22:21.342  [2024-12-09 17:11:44.285160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:22:21.342  [2024-12-09 17:11:44.285165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:22:21.342  [2024-12-09 17:11:44.285170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:22:21.342  [2024-12-09 17:11:44.285175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:22:21.342  [2024-12-09 17:11:44.285180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:22:21.342  [2024-12-09 17:11:44.285185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:22:21.342  [2024-12-09 17:11:44.285203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:22:21.342  [2024-12-09 17:11:44.285208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285215] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:21.342  [2024-12-09 17:11:44.285221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:22:21.342  [2024-12-09 17:11:44.285227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:21.342  [2024-12-09 17:11:44.285238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:22:21.342  [2024-12-09 17:11:44.285244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:22:21.342  [2024-12-09 17:11:44.285250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:22:21.342  [2024-12-09 17:11:44.285255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:22:21.342  [2024-12-09 17:11:44.285260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:22:21.342  [2024-12-09 17:11:44.285265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:22:21.342  [2024-12-09 17:11:44.285271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:21.342  [2024-12-09 17:11:44.285279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:21.342  [2024-12-09 17:11:44.285287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:21.342  [2024-12-09 17:11:44.285293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:22:21.342  [2024-12-09 17:11:44.285299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:22:21.342  [2024-12-09 17:11:44.285304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:22:21.342  [2024-12-09 17:11:44.285309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:22:21.342  [2024-12-09 17:11:44.285314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:22:21.342  [2024-12-09 17:11:44.285320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:22:21.342  [2024-12-09 17:11:44.285325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:22:21.342  [2024-12-09 17:11:44.285331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:22:21.342  [2024-12-09 17:11:44.285336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:22:21.342  [2024-12-09 17:11:44.285341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:22:21.342  [2024-12-09 17:11:44.285346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:22:21.342  [2024-12-09 17:11:44.285351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:22:21.342  [2024-12-09 17:11:44.285358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:22:21.342  [2024-12-09 17:11:44.285363] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:21.342  [2024-12-09 17:11:44.285369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:21.343  [2024-12-09 17:11:44.285375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:21.343  [2024-12-09 17:11:44.285380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:21.343  [2024-12-09 17:11:44.285385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:21.343  [2024-12-09 17:11:44.285391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:21.343  [2024-12-09 17:11:44.285398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.285404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:22:21.343  [2024-12-09 17:11:44.285411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.486 ms
00:22:21.343  [2024-12-09 17:11:44.285416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.309770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.309801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:22:21.343  [2024-12-09 17:11:44.309810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.308 ms
00:22:21.343  [2024-12-09 17:11:44.309820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.309899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.309906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:22:21.343  [2024-12-09 17:11:44.309913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.061 ms
00:22:21.343  [2024-12-09 17:11:44.309919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.351776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.351809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:22:21.343  [2024-12-09 17:11:44.351819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 41.815 ms
00:22:21.343  [2024-12-09 17:11:44.351826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.351871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.351880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:22:21.343  [2024-12-09 17:11:44.351889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:22:21.343  [2024-12-09 17:11:44.351896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.352315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.352336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:22:21.343  [2024-12-09 17:11:44.352344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.378 ms
00:22:21.343  [2024-12-09 17:11:44.352351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.352474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.352482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:22:21.343  [2024-12-09 17:11:44.352489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.107 ms
00:22:21.343  [2024-12-09 17:11:44.352499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.364499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.364524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:22:21.343  [2024-12-09 17:11:44.364535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.983 ms
00:22:21.343  [2024-12-09 17:11:44.364542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.343  [2024-12-09 17:11:44.374690] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:22:21.343  [2024-12-09 17:11:44.374824] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:22:21.343  [2024-12-09 17:11:44.374837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.343  [2024-12-09 17:11:44.374875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:22:21.343  [2024-12-09 17:11:44.374883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.204 ms
00:22:21.343  [2024-12-09 17:11:44.374889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.394015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.394043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:22:21.605  [2024-12-09 17:11:44.394052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.097 ms
00:22:21.605  [2024-12-09 17:11:44.394060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.403589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.403615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:22:21.605  [2024-12-09 17:11:44.403624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.489 ms
00:22:21.605  [2024-12-09 17:11:44.403630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.412865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.412973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:22:21.605  [2024-12-09 17:11:44.412986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.208 ms
00:22:21.605  [2024-12-09 17:11:44.412993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.413455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.413466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:22:21.605  [2024-12-09 17:11:44.413476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.403 ms
00:22:21.605  [2024-12-09 17:11:44.413483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.461625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.461665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:22:21.605  [2024-12-09 17:11:44.461679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 48.128 ms
00:22:21.605  [2024-12-09 17:11:44.461687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.469747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:22:21.605  [2024-12-09 17:11:44.471806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.471832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:22:21.605  [2024-12-09 17:11:44.471842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.083 ms
00:22:21.605  [2024-12-09 17:11:44.471860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.471920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.471929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:22:21.605  [2024-12-09 17:11:44.471939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:22:21.605  [2024-12-09 17:11:44.471946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.472035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.472045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:22:21.605  [2024-12-09 17:11:44.472052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.044 ms
00:22:21.605  [2024-12-09 17:11:44.472059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.472075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.472082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:22:21.605  [2024-12-09 17:11:44.472088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:22:21.605  [2024-12-09 17:11:44.472095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.472127] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:21.605  [2024-12-09 17:11:44.472135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.472141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:22:21.605  [2024-12-09 17:11:44.472148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:22:21.605  [2024-12-09 17:11:44.472155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.491383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.491507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:22:21.605  [2024-12-09 17:11:44.491525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.215 ms
00:22:21.605  [2024-12-09 17:11:44.491532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.491587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.605  [2024-12-09 17:11:44.491596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:22:21.605  [2024-12-09 17:11:44.491602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:22:21.605  [2024-12-09 17:11:44.491608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:21.605  [2024-12-09 17:11:44.492517] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.482 ms, result 0
00:22:22.996  
[2024-12-09T17:11:46.983Z] Copying: 16/1024 [MB] (16 MBps)
[2024-12-09T17:11:47.926Z] Copying: 32/1024 [MB] (15 MBps)
[2024-12-09T17:11:48.871Z] Copying: 45/1024 [MB] (13 MBps)
[2024-12-09T17:11:49.816Z] Copying: 57/1024 [MB] (11 MBps)
[2024-12-09T17:11:50.760Z] Copying: 74/1024 [MB] (16 MBps)
[2024-12-09T17:11:51.706Z] Copying: 87/1024 [MB] (12 MBps)
[2024-12-09T17:11:52.781Z] Copying: 98/1024 [MB] (11 MBps)
[2024-12-09T17:11:53.725Z] Copying: 113/1024 [MB] (14 MBps)
[2024-12-09T17:11:54.666Z] Copying: 123/1024 [MB] (10 MBps)
[2024-12-09T17:11:56.050Z] Copying: 134/1024 [MB] (10 MBps)
[2024-12-09T17:11:56.991Z] Copying: 144/1024 [MB] (10 MBps)
[2024-12-09T17:11:57.935Z] Copying: 155/1024 [MB] (10 MBps)
[2024-12-09T17:11:58.878Z] Copying: 167/1024 [MB] (12 MBps)
[2024-12-09T17:11:59.822Z] Copying: 181392/1048576 [kB] (9992 kBps)
[2024-12-09T17:12:00.765Z] Copying: 191452/1048576 [kB] (10060 kBps)
[2024-12-09T17:12:01.709Z] Copying: 199/1024 [MB] (12 MBps)
[2024-12-09T17:12:02.653Z] Copying: 213848/1048576 [kB] (9768 kBps)
[2024-12-09T17:12:04.040Z] Copying: 220/1024 [MB] (11 MBps)
[2024-12-09T17:12:04.984Z] Copying: 235/1024 [MB] (14 MBps)
[2024-12-09T17:12:05.936Z] Copying: 250024/1048576 [kB] (9316 kBps)
[2024-12-09T17:12:06.881Z] Copying: 255/1024 [MB] (11 MBps)
[2024-12-09T17:12:07.827Z] Copying: 266/1024 [MB] (11 MBps)
[2024-12-09T17:12:08.772Z] Copying: 281/1024 [MB] (14 MBps)
[2024-12-09T17:12:09.716Z] Copying: 292/1024 [MB] (11 MBps)
[2024-12-09T17:12:10.661Z] Copying: 306/1024 [MB] (13 MBps)
[2024-12-09T17:12:12.054Z] Copying: 317/1024 [MB] (11 MBps)
[2024-12-09T17:12:13.001Z] Copying: 329/1024 [MB] (11 MBps)
[2024-12-09T17:12:13.948Z] Copying: 344/1024 [MB] (14 MBps)
[2024-12-09T17:12:14.893Z] Copying: 357/1024 [MB] (12 MBps)
[2024-12-09T17:12:15.836Z] Copying: 374992/1048576 [kB] (9400 kBps)
[2024-12-09T17:12:16.779Z] Copying: 376/1024 [MB] (10 MBps)
[2024-12-09T17:12:17.726Z] Copying: 395/1024 [MB] (19 MBps)
[2024-12-09T17:12:18.670Z] Copying: 408/1024 [MB] (13 MBps)
[2024-12-09T17:12:20.059Z] Copying: 424/1024 [MB] (16 MBps)
[2024-12-09T17:12:20.682Z] Copying: 442/1024 [MB] (17 MBps)
[2024-12-09T17:12:22.069Z] Copying: 456/1024 [MB] (14 MBps)
[2024-12-09T17:12:22.643Z] Copying: 473/1024 [MB] (16 MBps)
[2024-12-09T17:12:24.030Z] Copying: 483/1024 [MB] (10 MBps)
[2024-12-09T17:12:24.975Z] Copying: 495/1024 [MB] (12 MBps)
[2024-12-09T17:12:25.919Z] Copying: 506/1024 [MB] (10 MBps)
[2024-12-09T17:12:26.863Z] Copying: 519/1024 [MB] (12 MBps)
[2024-12-09T17:12:27.808Z] Copying: 531/1024 [MB] (12 MBps)
[2024-12-09T17:12:28.752Z] Copying: 543/1024 [MB] (11 MBps)
[2024-12-09T17:12:29.697Z] Copying: 554/1024 [MB] (11 MBps)
[2024-12-09T17:12:30.642Z] Copying: 565/1024 [MB] (10 MBps)
[2024-12-09T17:12:32.034Z] Copying: 577/1024 [MB] (12 MBps)
[2024-12-09T17:12:32.647Z] Copying: 590/1024 [MB] (13 MBps)
[2024-12-09T17:12:34.029Z] Copying: 607/1024 [MB] (17 MBps)
[2024-12-09T17:12:35.027Z] Copying: 619/1024 [MB] (11 MBps)
[2024-12-09T17:12:35.977Z] Copying: 631/1024 [MB] (11 MBps)
[2024-12-09T17:12:36.920Z] Copying: 656/1024 [MB] (24 MBps)
[2024-12-09T17:12:37.863Z] Copying: 670/1024 [MB] (14 MBps)
[2024-12-09T17:12:38.805Z] Copying: 685/1024 [MB] (14 MBps)
[2024-12-09T17:12:39.749Z] Copying: 705/1024 [MB] (19 MBps)
[2024-12-09T17:12:40.694Z] Copying: 721/1024 [MB] (15 MBps)
[2024-12-09T17:12:41.637Z] Copying: 737/1024 [MB] (15 MBps)
[2024-12-09T17:12:43.022Z] Copying: 753/1024 [MB] (16 MBps)
[2024-12-09T17:12:43.966Z] Copying: 764/1024 [MB] (10 MBps)
[2024-12-09T17:12:44.911Z] Copying: 776/1024 [MB] (11 MBps)
[2024-12-09T17:12:45.854Z] Copying: 786/1024 [MB] (10 MBps)
[2024-12-09T17:12:46.797Z] Copying: 802/1024 [MB] (15 MBps)
[2024-12-09T17:12:47.741Z] Copying: 814/1024 [MB] (12 MBps)
[2024-12-09T17:12:48.684Z] Copying: 843832/1048576 [kB] (9448 kBps)
[2024-12-09T17:12:49.656Z] Copying: 834/1024 [MB] (10 MBps)
[2024-12-09T17:12:51.044Z] Copying: 847/1024 [MB] (13 MBps)
[2024-12-09T17:12:51.987Z] Copying: 877928/1048576 [kB] (9772 kBps)
[2024-12-09T17:12:52.930Z] Copying: 867/1024 [MB] (10 MBps)
[2024-12-09T17:12:53.874Z] Copying: 879/1024 [MB] (11 MBps)
[2024-12-09T17:12:54.820Z] Copying: 890/1024 [MB] (10 MBps)
[2024-12-09T17:12:55.763Z] Copying: 900/1024 [MB] (10 MBps)
[2024-12-09T17:12:56.709Z] Copying: 914/1024 [MB] (13 MBps)
[2024-12-09T17:12:57.653Z] Copying: 933/1024 [MB] (19 MBps)
[2024-12-09T17:12:59.040Z] Copying: 948/1024 [MB] (15 MBps)
[2024-12-09T17:12:59.985Z] Copying: 960/1024 [MB] (12 MBps)
[2024-12-09T17:13:00.926Z] Copying: 974/1024 [MB] (14 MBps)
[2024-12-09T17:13:01.869Z] Copying: 989/1024 [MB] (15 MBps)
[2024-12-09T17:13:02.814Z] Copying: 1005/1024 [MB] (15 MBps)
[2024-12-09T17:13:03.075Z] Copying: 1019/1024 [MB] (13 MBps)
[2024-12-09T17:13:03.075Z] Copying: 1024/1024 [MB] (average 13 MBps)
00:23:40.034  [2024-12-09 17:13:02.891062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.891124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:40.034  [2024-12-09 17:13:02.891148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:40.034  [2024-12-09 17:13:02.891160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.891189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:40.034  [2024-12-09 17:13:02.894840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.894899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:40.034  [2024-12-09 17:13:02.894915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.631 ms
00:23:40.034  [2024-12-09 17:13:02.894927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.895222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.895249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:40.034  [2024-12-09 17:13:02.895262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.261 ms
00:23:40.034  [2024-12-09 17:13:02.895275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.899069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.899093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:40.034  [2024-12-09 17:13:02.899102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.781 ms
00:23:40.034  [2024-12-09 17:13:02.899116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.905640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.905669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:40.034  [2024-12-09 17:13:02.905681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.509 ms
00:23:40.034  [2024-12-09 17:13:02.905690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.931303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.931337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:40.034  [2024-12-09 17:13:02.931349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.560 ms
00:23:40.034  [2024-12-09 17:13:02.931357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.946933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.946969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:40.034  [2024-12-09 17:13:02.946980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.541 ms
00:23:40.034  [2024-12-09 17:13:02.946988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.947135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.947147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:40.034  [2024-12-09 17:13:02.947156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.105 ms
00:23:40.034  [2024-12-09 17:13:02.947164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.971923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.971956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:40.034  [2024-12-09 17:13:02.971968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.744 ms
00:23:40.034  [2024-12-09 17:13:02.971976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:02.996173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:02.996210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:40.034  [2024-12-09 17:13:02.996221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.161 ms
00:23:40.034  [2024-12-09 17:13:02.996229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:03.020453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:03.020494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:40.034  [2024-12-09 17:13:03.020506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.185 ms
00:23:40.034  [2024-12-09 17:13:03.020513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:03.045373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.034  [2024-12-09 17:13:03.045417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:40.034  [2024-12-09 17:13:03.045428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.790 ms
00:23:40.034  [2024-12-09 17:13:03.045437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.034  [2024-12-09 17:13:03.045481] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:40.034  [2024-12-09 17:13:03.045506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.034  [2024-12-09 17:13:03.045675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.045994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:40.035  [2024-12-09 17:13:03.046383] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:40.035  [2024-12-09 17:13:03.046393] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         ef202dc6-70d9-47d6-9bf2-4a23092fd7e2
00:23:40.035  [2024-12-09 17:13:03.046404] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:40.035  [2024-12-09 17:13:03.046411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:40.035  [2024-12-09 17:13:03.046419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:40.035  [2024-12-09 17:13:03.046427] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:40.035  [2024-12-09 17:13:03.046443] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:40.035  [2024-12-09 17:13:03.046450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:40.035  [2024-12-09 17:13:03.046459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:40.035  [2024-12-09 17:13:03.046466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:40.035  [2024-12-09 17:13:03.046473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:40.035  [2024-12-09 17:13:03.046480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.035  [2024-12-09 17:13:03.046488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:40.035  [2024-12-09 17:13:03.046497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.000 ms
00:23:40.035  [2024-12-09 17:13:03.046507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.036  [2024-12-09 17:13:03.060669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.036  [2024-12-09 17:13:03.060710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:40.036  [2024-12-09 17:13:03.060722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.143 ms
00:23:40.036  [2024-12-09 17:13:03.060730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.036  [2024-12-09 17:13:03.061190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:40.036  [2024-12-09 17:13:03.061209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:40.036  [2024-12-09 17:13:03.061225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.423 ms
00:23:40.036  [2024-12-09 17:13:03.061233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.099152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.099200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:40.296  [2024-12-09 17:13:03.099214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.099224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.099294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.099304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:40.296  [2024-12-09 17:13:03.099320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.099330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.099400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.099413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:40.296  [2024-12-09 17:13:03.099423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.099433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.099451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.099461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:40.296  [2024-12-09 17:13:03.099470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.099482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.184159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.184338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:40.296  [2024-12-09 17:13:03.184359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.184369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.251976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.252026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:40.296  [2024-12-09 17:13:03.252044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.252053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.252153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.252164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:40.296  [2024-12-09 17:13:03.252173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.252182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.252217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.252228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:40.296  [2024-12-09 17:13:03.252236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.252245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.252343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.252353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:40.296  [2024-12-09 17:13:03.252361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.252369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.252400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.252411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:40.296  [2024-12-09 17:13:03.252430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.296  [2024-12-09 17:13:03.252438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.296  [2024-12-09 17:13:03.252485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.296  [2024-12-09 17:13:03.252496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:40.297  [2024-12-09 17:13:03.252506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.297  [2024-12-09 17:13:03.252514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.297  [2024-12-09 17:13:03.252564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:40.297  [2024-12-09 17:13:03.252576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:40.297  [2024-12-09 17:13:03.252585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:40.297  [2024-12-09 17:13:03.252594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:40.297  [2024-12-09 17:13:03.252734] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.633 ms, result 0
00:23:40.908  
00:23:40.908  
00:23:40.908   17:13:03 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:43.453  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:43.453   17:13:06 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:23:43.453  [2024-12-09 17:13:06.239129] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:23:43.453  [2024-12-09 17:13:06.239282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80535 ]
00:23:43.453  [2024-12-09 17:13:06.398981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:43.714  [2024-12-09 17:13:06.505443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:43.714  [2024-12-09 17:13:06.738923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:43.714  [2024-12-09 17:13:06.738980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:43.976  [2024-12-09 17:13:06.896044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.976  [2024-12-09 17:13:06.896085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:43.976  [2024-12-09 17:13:06.896096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:43.977  [2024-12-09 17:13:06.896103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.896142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.896153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:43.977  [2024-12-09 17:13:06.896160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.026 ms
00:23:43.977  [2024-12-09 17:13:06.896166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.896179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:43.977  [2024-12-09 17:13:06.896733] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:43.977  [2024-12-09 17:13:06.896746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.896753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:43.977  [2024-12-09 17:13:06.896760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.571 ms
00:23:43.977  [2024-12-09 17:13:06.896765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.898104] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:43.977  [2024-12-09 17:13:06.908505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.908645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:43.977  [2024-12-09 17:13:06.908660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.402 ms
00:23:43.977  [2024-12-09 17:13:06.908667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.908715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.908724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:43.977  [2024-12-09 17:13:06.908730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:23:43.977  [2024-12-09 17:13:06.908736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.915048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.915155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:43.977  [2024-12-09 17:13:06.915168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.271 ms
00:23:43.977  [2024-12-09 17:13:06.915179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.915236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.915243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:43.977  [2024-12-09 17:13:06.915250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.043 ms
00:23:43.977  [2024-12-09 17:13:06.915256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.915296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.915303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:43.977  [2024-12-09 17:13:06.915310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:43.977  [2024-12-09 17:13:06.915316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.915333] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:43.977  [2024-12-09 17:13:06.918282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.918380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:43.977  [2024-12-09 17:13:06.918396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.953 ms
00:23:43.977  [2024-12-09 17:13:06.918403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.918434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.918442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:43.977  [2024-12-09 17:13:06.918448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:23:43.977  [2024-12-09 17:13:06.918454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.918470] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:43.977  [2024-12-09 17:13:06.918488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:43.977  [2024-12-09 17:13:06.918518] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:43.977  [2024-12-09 17:13:06.918533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:43.977  [2024-12-09 17:13:06.918616] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:43.977  [2024-12-09 17:13:06.918625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:43.977  [2024-12-09 17:13:06.918634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:43.977  [2024-12-09 17:13:06.918643] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:43.977  [2024-12-09 17:13:06.918651] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:43.977  [2024-12-09 17:13:06.918658] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:23:43.977  [2024-12-09 17:13:06.918664] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:43.977  [2024-12-09 17:13:06.918672] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:43.977  [2024-12-09 17:13:06.918679] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:43.977  [2024-12-09 17:13:06.918686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.918692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:43.977  [2024-12-09 17:13:06.918698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.219 ms
00:23:43.977  [2024-12-09 17:13:06.918703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.977  [2024-12-09 17:13:06.918767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.977  [2024-12-09 17:13:06.918774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:43.977  [2024-12-09 17:13:06.918780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:23:43.977  [2024-12-09 17:13:06.918786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
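
A quick consistency check on the layout parameters reported above: the l2p region size follows directly from the logged entry count and address size. A minimal sketch in Python, using only values taken from this log (nothing here is SPDK API):

    # L2P table size = entries * address size (values from the log above)
    l2p_entries = 20971520
    addr_size_bytes = 4
    print(l2p_entries * addr_size_bytes / 2**20)  # 80.0 -> matches "Region l2p / blocks: 80.00 MiB" below
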
00:23:43.977  [2024-12-09 17:13:06.918877] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:43.977  [2024-12-09 17:13:06.918887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:43.977  [2024-12-09 17:13:06.918895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:43.977  [2024-12-09 17:13:06.918901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:43.977  [2024-12-09 17:13:06.918911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:43.977  [2024-12-09 17:13:06.918917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:43.977  [2024-12-09 17:13:06.918922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:23:43.977  [2024-12-09 17:13:06.918927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:43.977  [2024-12-09 17:13:06.918934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:23:43.977  [2024-12-09 17:13:06.918939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:43.977  [2024-12-09 17:13:06.918946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:43.977  [2024-12-09 17:13:06.918952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:23:43.977  [2024-12-09 17:13:06.918958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:43.977  [2024-12-09 17:13:06.918970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:43.977  [2024-12-09 17:13:06.918976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:23:43.977  [2024-12-09 17:13:06.918981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:43.977  [2024-12-09 17:13:06.918986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:43.977  [2024-12-09 17:13:06.918992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:23:43.977  [2024-12-09 17:13:06.918997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:43.977  [2024-12-09 17:13:06.919002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:43.977  [2024-12-09 17:13:06.919007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:23:43.977  [2024-12-09 17:13:06.919012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:43.977  [2024-12-09 17:13:06.919017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:43.977  [2024-12-09 17:13:06.919022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:23:43.977  [2024-12-09 17:13:06.919028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:43.977  [2024-12-09 17:13:06.919034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:43.977  [2024-12-09 17:13:06.919039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:23:43.977  [2024-12-09 17:13:06.919044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:43.977  [2024-12-09 17:13:06.919049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:43.977  [2024-12-09 17:13:06.919055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:23:43.977  [2024-12-09 17:13:06.919060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:43.977  [2024-12-09 17:13:06.919067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:43.977  [2024-12-09 17:13:06.919072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:23:43.977  [2024-12-09 17:13:06.919076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:43.978  [2024-12-09 17:13:06.919081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:43.978  [2024-12-09 17:13:06.919087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:23:43.978  [2024-12-09 17:13:06.919092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:43.978  [2024-12-09 17:13:06.919097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:43.978  [2024-12-09 17:13:06.919102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:23:43.978  [2024-12-09 17:13:06.919107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:43.978  [2024-12-09 17:13:06.919113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:43.978  [2024-12-09 17:13:06.919118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:23:43.978  [2024-12-09 17:13:06.919123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:43.978  [2024-12-09 17:13:06.919129] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:43.978  [2024-12-09 17:13:06.919142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:43.978  [2024-12-09 17:13:06.919148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:43.978  [2024-12-09 17:13:06.919154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:43.978  [2024-12-09 17:13:06.919160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:43.978  [2024-12-09 17:13:06.919166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:43.978  [2024-12-09 17:13:06.919171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:43.978  [2024-12-09 17:13:06.919176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:43.978  [2024-12-09 17:13:06.919181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:43.978  [2024-12-09 17:13:06.919187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:43.978  [2024-12-09 17:13:06.919193] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:43.978  [2024-12-09 17:13:06.919200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:23:43.978  [2024-12-09 17:13:06.919215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:23:43.978  [2024-12-09 17:13:06.919221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:23:43.978  [2024-12-09 17:13:06.919227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:23:43.978  [2024-12-09 17:13:06.919233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:23:43.978  [2024-12-09 17:13:06.919239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:23:43.978  [2024-12-09 17:13:06.919246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:23:43.978  [2024-12-09 17:13:06.919252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:23:43.978  [2024-12-09 17:13:06.919258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:23:43.978  [2024-12-09 17:13:06.919263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:23:43.978  [2024-12-09 17:13:06.919292] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:43.978  [2024-12-09 17:13:06.919298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:43.978  [2024-12-09 17:13:06.919311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:43.978  [2024-12-09 17:13:06.919316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:43.978  [2024-12-09 17:13:06.919322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:43.978  [2024-12-09 17:13:06.919327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.919334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:43.978  [2024-12-09 17:13:06.919340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.516 ms
00:23:43.978  [2024-12-09 17:13:06.919345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
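
The SB metadata dump above reports regions in FTL blocks (blk_offs/blk_sz) while the layout dump reports MiB; comparing the two implies a 4 KiB FTL block (0x5000 = 20480 blocks corresponds to the 80.00 MiB l2p region). A small conversion helper under that assumption; the block size is inferred from this log, not taken from SPDK documentation:

    # Convert FTL block counts from the SB metadata dump to MiB.
    FTL_BLOCK_BYTES = 4096  # inferred: 0x5000 blocks <-> 80.00 MiB

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_BYTES / 2**20

    print(blocks_to_mib(0x5000))  # 80.0   -> Region l2p
    print(blocks_to_mib(0x20))    # 0.125  -> shown as 0.12 MiB for Region sb
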
00:23:43.978  [2024-12-09 17:13:06.943544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.943574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:43.978  [2024-12-09 17:13:06.943583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.156 ms
00:23:43.978  [2024-12-09 17:13:06.943592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:06.943658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.943665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:43.978  [2024-12-09 17:13:06.943672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:23:43.978  [2024-12-09 17:13:06.943678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:06.983085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.983116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:43.978  [2024-12-09 17:13:06.983126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.365 ms
00:23:43.978  [2024-12-09 17:13:06.983133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:06.983166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.983174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:43.978  [2024-12-09 17:13:06.983183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:43.978  [2024-12-09 17:13:06.983189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:06.983596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.983610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:43.978  [2024-12-09 17:13:06.983618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.366 ms
00:23:43.978  [2024-12-09 17:13:06.983625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:06.983731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.983740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:43.978  [2024-12-09 17:13:06.983746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.092 ms
00:23:43.978  [2024-12-09 17:13:06.983756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:06.995553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:06.995580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:43.978  [2024-12-09 17:13:06.995591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.781 ms
00:23:43.978  [2024-12-09 17:13:06.995597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.978  [2024-12-09 17:13:07.005806] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:23:43.978  [2024-12-09 17:13:07.005835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:43.978  [2024-12-09 17:13:07.005860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.978  [2024-12-09 17:13:07.005868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:43.978  [2024-12-09 17:13:07.005875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.171 ms
00:23:43.978  [2024-12-09 17:13:07.005881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.024629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.024656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:44.240  [2024-12-09 17:13:07.024665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.716 ms
00:23:44.240  [2024-12-09 17:13:07.024672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.034040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.034065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:44.240  [2024-12-09 17:13:07.034072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.331 ms
00:23:44.240  [2024-12-09 17:13:07.034079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.043133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.043159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:44.240  [2024-12-09 17:13:07.043167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.028 ms
00:23:44.240  [2024-12-09 17:13:07.043173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.043630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.043641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:44.240  [2024-12-09 17:13:07.043650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.399 ms
00:23:44.240  [2024-12-09 17:13:07.043657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.091537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.091689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:44.240  [2024-12-09 17:13:07.091709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.867 ms
00:23:44.240  [2024-12-09 17:13:07.091715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.100094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:23:44.240  [2024-12-09 17:13:07.102284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.102383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:44.240  [2024-12-09 17:13:07.102396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.538 ms
00:23:44.240  [2024-12-09 17:13:07.102403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.102462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.102471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:44.240  [2024-12-09 17:13:07.102481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:23:44.240  [2024-12-09 17:13:07.102487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.102562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.102570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:44.240  [2024-12-09 17:13:07.102577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.029 ms
00:23:44.240  [2024-12-09 17:13:07.102584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.102600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.102607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:44.240  [2024-12-09 17:13:07.102614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:44.240  [2024-12-09 17:13:07.102620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.102651] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:44.240  [2024-12-09 17:13:07.102660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.240  [2024-12-09 17:13:07.102665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:44.240  [2024-12-09 17:13:07.102672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:23:44.240  [2024-12-09 17:13:07.102678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.240  [2024-12-09 17:13:07.120482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.241  [2024-12-09 17:13:07.120508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:44.241  [2024-12-09 17:13:07.120520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.790 ms
00:23:44.241  [2024-12-09 17:13:07.120527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.241  [2024-12-09 17:13:07.120583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:44.241  [2024-12-09 17:13:07.120591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:44.241  [2024-12-09 17:13:07.120598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:23:44.241  [2024-12-09 17:13:07.120604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.241  [2024-12-09 17:13:07.121524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.102 ms, result 0
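
Startup completes here in 225.102 ms. Each management step above is logged as a four-line trace_step block (Action, name, duration, status), which makes per-step latency easy to aggregate offline. A minimal sketch for this log format; the file name is illustrative and this is not an SPDK tool:

    import re

    # Sum trace_step durations per step name from a captured console log.
    pat_name = re.compile(r"trace_step: .*name:\s+(.*)$")
    pat_dur = re.compile(r"trace_step: .*duration:\s+([0-9.]+) ms")
    steps, current = {}, None
    with open("console.log") as f:  # hypothetical capture of this output
        for line in f:
            if (m := pat_name.search(line)):
                current = m.group(1).strip()
            elif (m := pat_dur.search(line)) and current:
                steps[current] = steps.get(current, 0.0) + float(m.group(1))
    for name, ms in sorted(steps.items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")
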
00:23:45.184  
[2024-12-09T17:13:09.170Z] Copying: 20/1024 [MB] (20 MBps)
[2024-12-09T17:13:10.557Z] Copying: 36/1024 [MB] (16 MBps)
[2024-12-09T17:13:11.501Z] Copying: 53/1024 [MB] (17 MBps)
[2024-12-09T17:13:12.444Z] Copying: 70/1024 [MB] (17 MBps)
[2024-12-09T17:13:13.389Z] Copying: 91/1024 [MB] (20 MBps)
[2024-12-09T17:13:14.332Z] Copying: 119/1024 [MB] (27 MBps)
[2024-12-09T17:13:15.272Z] Copying: 139/1024 [MB] (20 MBps)
[2024-12-09T17:13:16.216Z] Copying: 165/1024 [MB] (25 MBps)
[2024-12-09T17:13:17.159Z] Copying: 180/1024 [MB] (15 MBps)
[2024-12-09T17:13:18.575Z] Copying: 196/1024 [MB] (15 MBps)
[2024-12-09T17:13:19.163Z] Copying: 216/1024 [MB] (20 MBps)
[2024-12-09T17:13:20.550Z] Copying: 232/1024 [MB] (15 MBps)
[2024-12-09T17:13:21.495Z] Copying: 256/1024 [MB] (24 MBps)
[2024-12-09T17:13:22.442Z] Copying: 277/1024 [MB] (21 MBps)
[2024-12-09T17:13:23.379Z] Copying: 297/1024 [MB] (19 MBps)
[2024-12-09T17:13:24.322Z] Copying: 329/1024 [MB] (32 MBps)
[2024-12-09T17:13:25.267Z] Copying: 362/1024 [MB] (32 MBps)
[2024-12-09T17:13:26.211Z] Copying: 388/1024 [MB] (26 MBps)
[2024-12-09T17:13:27.155Z] Copying: 404/1024 [MB] (16 MBps)
[2024-12-09T17:13:28.544Z] Copying: 423/1024 [MB] (18 MBps)
[2024-12-09T17:13:29.490Z] Copying: 445/1024 [MB] (22 MBps)
[2024-12-09T17:13:30.434Z] Copying: 465/1024 [MB] (20 MBps)
[2024-12-09T17:13:31.379Z] Copying: 481/1024 [MB] (15 MBps)
[2024-12-09T17:13:32.325Z] Copying: 499/1024 [MB] (17 MBps)
[2024-12-09T17:13:33.362Z] Copying: 512/1024 [MB] (13 MBps)
[2024-12-09T17:13:34.308Z] Copying: 529/1024 [MB] (16 MBps)
[2024-12-09T17:13:35.254Z] Copying: 545/1024 [MB] (16 MBps)
[2024-12-09T17:13:36.197Z] Copying: 567/1024 [MB] (21 MBps)
[2024-12-09T17:13:37.142Z] Copying: 577/1024 [MB] (10 MBps)
[2024-12-09T17:13:38.531Z] Copying: 599/1024 [MB] (21 MBps)
[2024-12-09T17:13:39.476Z] Copying: 633/1024 [MB] (34 MBps)
[2024-12-09T17:13:40.423Z] Copying: 646/1024 [MB] (13 MBps)
[2024-12-09T17:13:41.369Z] Copying: 661/1024 [MB] (15 MBps)
[2024-12-09T17:13:42.314Z] Copying: 675/1024 [MB] (14 MBps)
[2024-12-09T17:13:43.259Z] Copying: 691/1024 [MB] (15 MBps)
[2024-12-09T17:13:44.204Z] Copying: 706/1024 [MB] (15 MBps)
[2024-12-09T17:13:45.151Z] Copying: 729/1024 [MB] (22 MBps)
[2024-12-09T17:13:46.540Z] Copying: 743/1024 [MB] (13 MBps)
[2024-12-09T17:13:47.510Z] Copying: 760/1024 [MB] (17 MBps)
[2024-12-09T17:13:48.481Z] Copying: 794/1024 [MB] (34 MBps)
[2024-12-09T17:13:49.425Z] Copying: 810/1024 [MB] (16 MBps)
[2024-12-09T17:13:50.372Z] Copying: 826/1024 [MB] (15 MBps)
[2024-12-09T17:13:51.318Z] Copying: 839/1024 [MB] (13 MBps)
[2024-12-09T17:13:52.263Z] Copying: 856/1024 [MB] (16 MBps)
[2024-12-09T17:13:53.206Z] Copying: 870/1024 [MB] (14 MBps)
[2024-12-09T17:13:54.149Z] Copying: 881/1024 [MB] (11 MBps)
[2024-12-09T17:13:55.533Z] Copying: 912164/1048576 [kB] (9620 kBps)
[2024-12-09T17:13:56.475Z] Copying: 905/1024 [MB] (14 MBps)
[2024-12-09T17:13:57.419Z] Copying: 926/1024 [MB] (21 MBps)
[2024-12-09T17:13:58.362Z] Copying: 945/1024 [MB] (18 MBps)
[2024-12-09T17:13:59.306Z] Copying: 957/1024 [MB] (12 MBps)
[2024-12-09T17:14:00.248Z] Copying: 971/1024 [MB] (13 MBps)
[2024-12-09T17:14:01.191Z] Copying: 984/1024 [MB] (12 MBps)
[2024-12-09T17:14:02.168Z] Copying: 996/1024 [MB] (11 MBps)
[2024-12-09T17:14:03.555Z] Copying: 1007/1024 [MB] (11 MBps)
[2024-12-09T17:14:03.818Z] Copying: 1022/1024 [MB] (14 MBps)
[2024-12-09T17:14:03.818Z] Copying: 1024/1024 [MB] (average 18 MBps)
00:24:40.777  [2024-12-09 17:14:03.730471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.730528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:40.777  [2024-12-09 17:14:03.730549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:24:40.777  [2024-12-09 17:14:03.730557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
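
One note on the copy loop above: it reports an average of 18 MBps for the full 1024 MB, which is consistent with its own progress timestamps (roughly 17:13:07 to 17:14:03.8, about 57 s). A quick sanity check, with the endpoints read off the log rather than measured:

    # Average throughput implied by the Copying progress lines above.
    total_mb = 1024
    elapsed_s = 57  # ~17:13:07 -> ~17:14:03.8, read from the log
    print(total_mb / elapsed_s)  # ~17.96 MB/s, matching the reported 18 MBps average
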
00:24:40.777  [2024-12-09 17:14:03.732374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:40.777  [2024-12-09 17:14:03.735472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.735502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:40.777  [2024-12-09 17:14:03.735512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.074 ms
00:24:40.777  [2024-12-09 17:14:03.735518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.777  [2024-12-09 17:14:03.747208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.747329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:40.777  [2024-12-09 17:14:03.747343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.643 ms
00:24:40.777  [2024-12-09 17:14:03.747356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.777  [2024-12-09 17:14:03.764937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.764971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:40.777  [2024-12-09 17:14:03.764982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.565 ms
00:24:40.777  [2024-12-09 17:14:03.764989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.777  [2024-12-09 17:14:03.769771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.769795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:40.777  [2024-12-09 17:14:03.769804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.759 ms
00:24:40.777  [2024-12-09 17:14:03.769816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.777  [2024-12-09 17:14:03.789517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.789629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:40.777  [2024-12-09 17:14:03.789642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.651 ms
00:24:40.777  [2024-12-09 17:14:03.789648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.777  [2024-12-09 17:14:03.801757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.777  [2024-12-09 17:14:03.801785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:40.777  [2024-12-09 17:14:03.801795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.085 ms
00:24:40.777  [2024-12-09 17:14:03.801802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.039  [2024-12-09 17:14:04.047931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.039  [2024-12-09 17:14:04.048042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:41.039  [2024-12-09 17:14:04.048056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 246.101 ms
00:24:41.039  [2024-12-09 17:14:04.048063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.039  [2024-12-09 17:14:04.066912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.039  [2024-12-09 17:14:04.067013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:41.039  [2024-12-09 17:14:04.067025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.837 ms
00:24:41.039  [2024-12-09 17:14:04.067031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.302  [2024-12-09 17:14:04.085295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.302  [2024-12-09 17:14:04.085318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:41.302  [2024-12-09 17:14:04.085326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.242 ms
00:24:41.302  [2024-12-09 17:14:04.085332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.302  [2024-12-09 17:14:04.103261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.302  [2024-12-09 17:14:04.103284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:41.302  [2024-12-09 17:14:04.103292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.904 ms
00:24:41.302  [2024-12-09 17:14:04.103297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.302  [2024-12-09 17:14:04.120897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.302  [2024-12-09 17:14:04.120997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:41.302  [2024-12-09 17:14:04.121009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.558 ms
00:24:41.302  [2024-12-09 17:14:04.121015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.302  [2024-12-09 17:14:04.121036] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:41.302  [2024-12-09 17:14:04.121047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:    83456 / 261120 	wr_cnt: 1	state: open
00:24:41.302  [2024-12-09 17:14:04.121056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.302  [2024-12-09 17:14:04.121609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:41.303  [2024-12-09 17:14:04.121759] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:41.303  [2024-12-09 17:14:04.121766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         ef202dc6-70d9-47d6-9bf2-4a23092fd7e2
00:24:41.303  [2024-12-09 17:14:04.121772] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    83456
00:24:41.303  [2024-12-09 17:14:04.121778] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        84416
00:24:41.303  [2024-12-09 17:14:04.121783] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         83456
00:24:41.303  [2024-12-09 17:14:04.121789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0115
00:24:41.303  [2024-12-09 17:14:04.121801] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:41.303  [2024-12-09 17:14:04.121808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:41.303  [2024-12-09 17:14:04.121814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:41.303  [2024-12-09 17:14:04.121819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:41.303  [2024-12-09 17:14:04.121824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:41.303  [2024-12-09 17:14:04.121829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.303  [2024-12-09 17:14:04.121835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:41.303  [2024-12-09 17:14:04.121842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.794 ms
00:24:41.303  [2024-12-09 17:14:04.121862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
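
The WAF value in the statistics dump is consistent with the two write counters logged alongside it (total writes over user writes). Checking the arithmetic with the values above:

    # Write amplification factor from the counters in the stats dump.
    total_writes = 84416
    user_writes = 83456
    print(total_writes / user_writes)  # ~1.01150, matching the reported WAF 1.0115
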
00:24:41.303  [2024-12-09 17:14:04.131863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.303  [2024-12-09 17:14:04.131960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:41.303  [2024-12-09 17:14:04.131975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.989 ms
00:24:41.303  [2024-12-09 17:14:04.131981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.132265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:41.303  [2024-12-09 17:14:04.132273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:41.303  [2024-12-09 17:14:04.132279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.270 ms
00:24:41.303  [2024-12-09 17:14:04.132285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.159398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.159500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:41.303  [2024-12-09 17:14:04.159513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.159520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.159560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.159567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:41.303  [2024-12-09 17:14:04.159574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.159580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.159627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.159639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:41.303  [2024-12-09 17:14:04.159646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.159653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.159665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.159671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:41.303  [2024-12-09 17:14:04.159677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.159683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.222907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.222946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:41.303  [2024-12-09 17:14:04.222955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.222962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.274709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.274878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:41.303  [2024-12-09 17:14:04.274893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.274901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.274969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.274977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:41.303  [2024-12-09 17:14:04.274984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.274993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.275022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.275030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:41.303  [2024-12-09 17:14:04.275037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.275044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.275120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.275130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:41.303  [2024-12-09 17:14:04.275137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.275146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.275170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.275178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:41.303  [2024-12-09 17:14:04.275184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.275190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.275226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.275233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:41.303  [2024-12-09 17:14:04.275240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.275246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.275287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.303  [2024-12-09 17:14:04.275296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:41.303  [2024-12-09 17:14:04.275303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.303  [2024-12-09 17:14:04.275309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.303  [2024-12-09 17:14:04.275415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.913 ms, result 0
00:24:42.694  
00:24:42.694  
00:24:42.694   17:14:05 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:24:42.694  [2024-12-09 17:14:05.648182] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:24:42.694  [2024-12-09 17:14:05.648459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81145 ]
00:24:42.956  [2024-12-09 17:14:05.805937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:42.956  [2024-12-09 17:14:05.891749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:43.217  [2024-12-09 17:14:06.123503] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:43.217  [2024-12-09 17:14:06.123695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:43.481  [2024-12-09 17:14:06.276536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.276578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:43.481  [2024-12-09 17:14:06.276591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:24:43.481  [2024-12-09 17:14:06.276597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.276635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.276645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:43.481  [2024-12-09 17:14:06.276652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:24:43.481  [2024-12-09 17:14:06.276658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.276671] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:43.481  [2024-12-09 17:14:06.277198] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:43.481  [2024-12-09 17:14:06.277211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.277217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:43.481  [2024-12-09 17:14:06.277224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.543 ms
00:24:43.481  [2024-12-09 17:14:06.277230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.278486] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:43.481  [2024-12-09 17:14:06.288839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.288871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:24:43.481  [2024-12-09 17:14:06.288882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.354 ms
00:24:43.481  [2024-12-09 17:14:06.288888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.288937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.288945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:24:43.481  [2024-12-09 17:14:06.288951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:24:43.481  [2024-12-09 17:14:06.288957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.295136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.295161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:43.481  [2024-12-09 17:14:06.295169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.131 ms
00:24:43.481  [2024-12-09 17:14:06.295178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.295234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.295241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:43.481  [2024-12-09 17:14:06.295247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.042 ms
00:24:43.481  [2024-12-09 17:14:06.295254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.295287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.295295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:43.481  [2024-12-09 17:14:06.295301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:43.481  [2024-12-09 17:14:06.295307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.295325] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:43.481  [2024-12-09 17:14:06.298316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.298340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:43.481  [2024-12-09 17:14:06.298349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.995 ms
00:24:43.481  [2024-12-09 17:14:06.298355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.298386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.298393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:43.481  [2024-12-09 17:14:06.298400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:24:43.481  [2024-12-09 17:14:06.298406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.298421] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:43.481  [2024-12-09 17:14:06.298438] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:43.481  [2024-12-09 17:14:06.298467] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:43.481  [2024-12-09 17:14:06.298482] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:43.481  [2024-12-09 17:14:06.298565] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:43.481  [2024-12-09 17:14:06.298574] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:43.481  [2024-12-09 17:14:06.298582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:43.481  [2024-12-09 17:14:06.298591] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:43.481  [2024-12-09 17:14:06.298598] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:43.481  [2024-12-09 17:14:06.298605] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:24:43.481  [2024-12-09 17:14:06.298611] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:43.481  [2024-12-09 17:14:06.298619] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:43.481  [2024-12-09 17:14:06.298625] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:43.481  [2024-12-09 17:14:06.298631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.298637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:43.481  [2024-12-09 17:14:06.298642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.212 ms
00:24:43.481  [2024-12-09 17:14:06.298648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.481  [2024-12-09 17:14:06.298712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.481  [2024-12-09 17:14:06.298718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:43.481  [2024-12-09 17:14:06.298724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:24:43.482  [2024-12-09 17:14:06.298729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.298807] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:43.482  [2024-12-09 17:14:06.298816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:43.482  [2024-12-09 17:14:06.298822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:43.482  [2024-12-09 17:14:06.298828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:43.482  [2024-12-09 17:14:06.298840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:24:43.482  [2024-12-09 17:14:06.298865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:43.482  [2024-12-09 17:14:06.298870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:43.482  [2024-12-09 17:14:06.298881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:43.482  [2024-12-09 17:14:06.298886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:24:43.482  [2024-12-09 17:14:06.298891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:43.482  [2024-12-09 17:14:06.298901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:43.482  [2024-12-09 17:14:06.298909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:24:43.482  [2024-12-09 17:14:06.298915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:43.482  [2024-12-09 17:14:06.298926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:24:43.482  [2024-12-09 17:14:06.298931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:43.482  [2024-12-09 17:14:06.298942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:43.482  [2024-12-09 17:14:06.298952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:43.482  [2024-12-09 17:14:06.298958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:43.482  [2024-12-09 17:14:06.298969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:43.482  [2024-12-09 17:14:06.298974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:43.482  [2024-12-09 17:14:06.298984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:43.482  [2024-12-09 17:14:06.298989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:24:43.482  [2024-12-09 17:14:06.298995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:43.482  [2024-12-09 17:14:06.299000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:43.482  [2024-12-09 17:14:06.299011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:24:43.482  [2024-12-09 17:14:06.299016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:43.482  [2024-12-09 17:14:06.299021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:43.482  [2024-12-09 17:14:06.299026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:24:43.482  [2024-12-09 17:14:06.299031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:43.482  [2024-12-09 17:14:06.299036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:43.482  [2024-12-09 17:14:06.299041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:24:43.482  [2024-12-09 17:14:06.299046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.299051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:43.482  [2024-12-09 17:14:06.299056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:24:43.482  [2024-12-09 17:14:06.299060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.299065] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:43.482  [2024-12-09 17:14:06.299071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:43.482  [2024-12-09 17:14:06.299077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:43.482  [2024-12-09 17:14:06.299084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:43.482  [2024-12-09 17:14:06.299091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:43.482  [2024-12-09 17:14:06.299097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:43.482  [2024-12-09 17:14:06.299102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:43.482  [2024-12-09 17:14:06.299107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:43.482  [2024-12-09 17:14:06.299113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:43.482  [2024-12-09 17:14:06.299118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:43.482  [2024-12-09 17:14:06.299125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:43.482  [2024-12-09 17:14:06.299132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:43.482  [2024-12-09 17:14:06.299148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:43.482  [2024-12-09 17:14:06.299153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:43.482  [2024-12-09 17:14:06.299158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:43.482  [2024-12-09 17:14:06.299164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:43.482  [2024-12-09 17:14:06.299169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:43.482  [2024-12-09 17:14:06.299175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:43.482  [2024-12-09 17:14:06.299181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:43.482  [2024-12-09 17:14:06.299186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:43.482  [2024-12-09 17:14:06.299192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:43.482  [2024-12-09 17:14:06.299220] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:43.482  [2024-12-09 17:14:06.299226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:43.482  [2024-12-09 17:14:06.299238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:43.482  [2024-12-09 17:14:06.299243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:43.482  [2024-12-09 17:14:06.299249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:43.482  [2024-12-09 17:14:06.299255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.299261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:43.482  [2024-12-09 17:14:06.299266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.501 ms
00:24:43.482  [2024-12-09 17:14:06.299273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.323315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.323346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:43.482  [2024-12-09 17:14:06.323355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.994 ms
00:24:43.482  [2024-12-09 17:14:06.323364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.323432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.323440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:43.482  [2024-12-09 17:14:06.323446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.049 ms
00:24:43.482  [2024-12-09 17:14:06.323452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.361983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.362017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:43.482  [2024-12-09 17:14:06.362028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.489 ms
00:24:43.482  [2024-12-09 17:14:06.362036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.362070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.362079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:43.482  [2024-12-09 17:14:06.362088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:24:43.482  [2024-12-09 17:14:06.362095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.362499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.362517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:43.482  [2024-12-09 17:14:06.362525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.363 ms
00:24:43.482  [2024-12-09 17:14:06.362531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.482  [2024-12-09 17:14:06.362643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.482  [2024-12-09 17:14:06.362655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:43.482  [2024-12-09 17:14:06.362662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.095 ms
00:24:43.483  [2024-12-09 17:14:06.362672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.374544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.374573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:43.483  [2024-12-09 17:14:06.374584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.856 ms
00:24:43.483  [2024-12-09 17:14:06.374590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.385049] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:24:43.483  [2024-12-09 17:14:06.385161] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:43.483  [2024-12-09 17:14:06.385174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.385181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:24:43.483  [2024-12-09 17:14:06.385188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.492 ms
00:24:43.483  [2024-12-09 17:14:06.385195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.403665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.403694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:24:43.483  [2024-12-09 17:14:06.403703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.442 ms
00:24:43.483  [2024-12-09 17:14:06.403710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.412599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.412625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:24:43.483  [2024-12-09 17:14:06.412632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.858 ms
00:24:43.483  [2024-12-09 17:14:06.412639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.421132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.421156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:24:43.483  [2024-12-09 17:14:06.421164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.468 ms
00:24:43.483  [2024-12-09 17:14:06.421170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.421629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.421649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:43.483  [2024-12-09 17:14:06.421659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.403 ms
00:24:43.483  [2024-12-09 17:14:06.421665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.469261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.469303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:24:43.483  [2024-12-09 17:14:06.469319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.580 ms
00:24:43.483  [2024-12-09 17:14:06.469326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.477290] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:43.483  [2024-12-09 17:14:06.479231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.479342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:43.483  [2024-12-09 17:14:06.479356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.868 ms
00:24:43.483  [2024-12-09 17:14:06.479364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.479426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.479435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:24:43.483  [2024-12-09 17:14:06.479446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:24:43.483  [2024-12-09 17:14:06.479452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.480649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.480677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:43.483  [2024-12-09 17:14:06.480685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.150 ms
00:24:43.483  [2024-12-09 17:14:06.480691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.480712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.480719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:43.483  [2024-12-09 17:14:06.480726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:43.483  [2024-12-09 17:14:06.480733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.480766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:43.483  [2024-12-09 17:14:06.480775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.480781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:24:43.483  [2024-12-09 17:14:06.480788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:24:43.483  [2024-12-09 17:14:06.480794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.499427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.499453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:43.483  [2024-12-09 17:14:06.499465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.617 ms
00:24:43.483  [2024-12-09 17:14:06.499471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.499528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:43.483  [2024-12-09 17:14:06.499536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:43.483  [2024-12-09 17:14:06.499543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:24:43.483  [2024-12-09 17:14:06.499549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:43.483  [2024-12-09 17:14:06.500448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.513 ms, result 0
00:24:44.873  
[2024-12-09T17:14:08.858Z] Copying: 11/1024 [MB] (11 MBps)
[2024-12-09T17:14:09.803Z] Copying: 21/1024 [MB] (10 MBps)
[2024-12-09T17:14:10.749Z] Copying: 37/1024 [MB] (16 MBps)
[2024-12-09T17:14:11.694Z] Copying: 50/1024 [MB] (13 MBps)
[2024-12-09T17:14:13.082Z] Copying: 61848/1048576 [kB] (9704 kBps)
[2024-12-09T17:14:13.654Z] Copying: 70/1024 [MB] (10 MBps)
[2024-12-09T17:14:15.043Z] Copying: 82/1024 [MB] (12 MBps)
[2024-12-09T17:14:15.997Z] Copying: 94/1024 [MB] (11 MBps)
[2024-12-09T17:14:16.983Z] Copying: 115/1024 [MB] (21 MBps)
[2024-12-09T17:14:17.927Z] Copying: 126/1024 [MB] (10 MBps)
[2024-12-09T17:14:18.870Z] Copying: 137/1024 [MB] (11 MBps)
[2024-12-09T17:14:19.814Z] Copying: 158/1024 [MB] (20 MBps)
[2024-12-09T17:14:20.856Z] Copying: 173/1024 [MB] (15 MBps)
[2024-12-09T17:14:21.799Z] Copying: 186/1024 [MB] (12 MBps)
[2024-12-09T17:14:22.744Z] Copying: 197/1024 [MB] (11 MBps)
[2024-12-09T17:14:23.691Z] Copying: 209/1024 [MB] (11 MBps)
[2024-12-09T17:14:25.080Z] Copying: 221/1024 [MB] (11 MBps)
[2024-12-09T17:14:25.651Z] Copying: 231/1024 [MB] (10 MBps)
[2024-12-09T17:14:27.040Z] Copying: 244/1024 [MB] (12 MBps)
[2024-12-09T17:14:27.985Z] Copying: 256/1024 [MB] (11 MBps)
[2024-12-09T17:14:28.929Z] Copying: 276/1024 [MB] (20 MBps)
[2024-12-09T17:14:29.875Z] Copying: 289/1024 [MB] (13 MBps)
[2024-12-09T17:14:30.819Z] Copying: 304/1024 [MB] (15 MBps)
[2024-12-09T17:14:31.764Z] Copying: 316/1024 [MB] (11 MBps)
[2024-12-09T17:14:32.706Z] Copying: 329/1024 [MB] (13 MBps)
[2024-12-09T17:14:33.650Z] Copying: 342/1024 [MB] (13 MBps)
[2024-12-09T17:14:35.038Z] Copying: 356/1024 [MB] (13 MBps)
[2024-12-09T17:14:35.980Z] Copying: 367/1024 [MB] (10 MBps)
[2024-12-09T17:14:36.923Z] Copying: 379/1024 [MB] (11 MBps)
[2024-12-09T17:14:37.868Z] Copying: 401/1024 [MB] (22 MBps)
[2024-12-09T17:14:38.812Z] Copying: 411/1024 [MB] (10 MBps)
[2024-12-09T17:14:39.759Z] Copying: 422/1024 [MB] (11 MBps)
[2024-12-09T17:14:40.704Z] Copying: 437/1024 [MB] (14 MBps)
[2024-12-09T17:14:41.651Z] Copying: 449/1024 [MB] (12 MBps)
[2024-12-09T17:14:43.094Z] Copying: 470340/1048576 [kB] (9792 kBps)
[2024-12-09T17:14:43.666Z] Copying: 470/1024 [MB] (11 MBps)
[2024-12-09T17:14:45.054Z] Copying: 486/1024 [MB] (15 MBps)
[2024-12-09T17:14:45.996Z] Copying: 498/1024 [MB] (11 MBps)
[2024-12-09T17:14:46.943Z] Copying: 509/1024 [MB] (11 MBps)
[2024-12-09T17:14:47.888Z] Copying: 520/1024 [MB] (10 MBps)
[2024-12-09T17:14:48.832Z] Copying: 530/1024 [MB] (10 MBps)
[2024-12-09T17:14:49.776Z] Copying: 540/1024 [MB] (10 MBps)
[2024-12-09T17:14:50.719Z] Copying: 551/1024 [MB] (10 MBps)
[2024-12-09T17:14:51.664Z] Copying: 563/1024 [MB] (11 MBps)
[2024-12-09T17:14:53.052Z] Copying: 575/1024 [MB] (12 MBps)
[2024-12-09T17:14:53.996Z] Copying: 588/1024 [MB] (13 MBps)
[2024-12-09T17:14:54.940Z] Copying: 604/1024 [MB] (15 MBps)
[2024-12-09T17:14:55.915Z] Copying: 618/1024 [MB] (14 MBps)
[2024-12-09T17:14:56.860Z] Copying: 631/1024 [MB] (12 MBps)
[2024-12-09T17:14:57.805Z] Copying: 649/1024 [MB] (18 MBps)
[2024-12-09T17:14:58.752Z] Copying: 660/1024 [MB] (10 MBps)
[2024-12-09T17:14:59.697Z] Copying: 672/1024 [MB] (12 MBps)
[2024-12-09T17:15:01.081Z] Copying: 683/1024 [MB] (10 MBps)
[2024-12-09T17:15:01.649Z] Copying: 695/1024 [MB] (12 MBps)
[2024-12-09T17:15:03.035Z] Copying: 707/1024 [MB] (11 MBps)
[2024-12-09T17:15:03.981Z] Copying: 721/1024 [MB] (13 MBps)
[2024-12-09T17:15:04.925Z] Copying: 731/1024 [MB] (10 MBps)
[2024-12-09T17:15:05.868Z] Copying: 759056/1048576 [kB] (9504 kBps)
[2024-12-09T17:15:06.811Z] Copying: 753/1024 [MB] (11 MBps)
[2024-12-09T17:15:07.753Z] Copying: 764/1024 [MB] (10 MBps)
[2024-12-09T17:15:08.695Z] Copying: 775/1024 [MB] (11 MBps)
[2024-12-09T17:15:10.080Z] Copying: 790/1024 [MB] (14 MBps)
[2024-12-09T17:15:10.651Z] Copying: 819232/1048576 [kB] (10164 kBps)
[2024-12-09T17:15:12.039Z] Copying: 810/1024 [MB] (10 MBps)
[2024-12-09T17:15:13.042Z] Copying: 821/1024 [MB] (10 MBps)
[2024-12-09T17:15:13.986Z] Copying: 831/1024 [MB] (10 MBps)
[2024-12-09T17:15:14.932Z] Copying: 861132/1048576 [kB] (9332 kBps)
[2024-12-09T17:15:15.876Z] Copying: 851/1024 [MB] (10 MBps)
[2024-12-09T17:15:16.819Z] Copying: 864/1024 [MB] (12 MBps)
[2024-12-09T17:15:17.763Z] Copying: 874/1024 [MB] (10 MBps)
[2024-12-09T17:15:18.704Z] Copying: 888/1024 [MB] (13 MBps)
[2024-12-09T17:15:20.089Z] Copying: 908/1024 [MB] (20 MBps)
[2024-12-09T17:15:20.660Z] Copying: 922/1024 [MB] (14 MBps)
[2024-12-09T17:15:22.049Z] Copying: 934/1024 [MB] (11 MBps)
[2024-12-09T17:15:22.995Z] Copying: 946/1024 [MB] (12 MBps)
[2024-12-09T17:15:23.941Z] Copying: 960/1024 [MB] (13 MBps)
[2024-12-09T17:15:24.885Z] Copying: 993872/1048576 [kB] (10232 kBps)
[2024-12-09T17:15:25.828Z] Copying: 991/1024 [MB] (20 MBps)
[2024-12-09T17:15:26.774Z] Copying: 1008/1024 [MB] (17 MBps)
[2024-12-09T17:15:26.774Z] Copying: 1023/1024 [MB] (14 MBps)
[2024-12-09T17:15:27.347Z] Copying: 1024/1024 [MB] (average 12 MBps)
00:26:04.306  [2024-12-09 17:15:27.063898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.064037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:26:04.306  [2024-12-09 17:15:27.064095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:26:04.306  [2024-12-09 17:15:27.064121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.306  [2024-12-09 17:15:27.064187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:04.306  [2024-12-09 17:15:27.067834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.067896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:26:04.306  [2024-12-09 17:15:27.067911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.604 ms
00:26:04.306  [2024-12-09 17:15:27.067920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.306  [2024-12-09 17:15:27.068184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.068196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:26:04.306  [2024-12-09 17:15:27.068206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.227 ms
00:26:04.306  [2024-12-09 17:15:27.068223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.306  [2024-12-09 17:15:27.075868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.075922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:26:04.306  [2024-12-09 17:15:27.075935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.625 ms
00:26:04.306  [2024-12-09 17:15:27.075944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.306  [2024-12-09 17:15:27.082235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.082277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:26:04.306  [2024-12-09 17:15:27.082291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.243 ms
00:26:04.306  [2024-12-09 17:15:27.082309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.306  [2024-12-09 17:15:27.110642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.110698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:26:04.306  [2024-12-09 17:15:27.110715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.280 ms
00:26:04.306  [2024-12-09 17:15:27.110725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.306  [2024-12-09 17:15:27.128310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.306  [2024-12-09 17:15:27.128364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:26:04.306  [2024-12-09 17:15:27.128389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.529 ms
00:26:04.306  [2024-12-09 17:15:27.128400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.568  [2024-12-09 17:15:27.412830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.568  [2024-12-09 17:15:27.412921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:26:04.568  [2024-12-09 17:15:27.412948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 284.367 ms
00:26:04.568  [2024-12-09 17:15:27.412960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.568  [2024-12-09 17:15:27.441604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.568  [2024-12-09 17:15:27.441661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:26:04.568  [2024-12-09 17:15:27.441677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.625 ms
00:26:04.568  [2024-12-09 17:15:27.441685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.568  [2024-12-09 17:15:27.467928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.568  [2024-12-09 17:15:27.467980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:26:04.568  [2024-12-09 17:15:27.467994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.190 ms
00:26:04.568  [2024-12-09 17:15:27.468002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.568  [2024-12-09 17:15:27.494013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.568  [2024-12-09 17:15:27.494068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:26:04.568  [2024-12-09 17:15:27.494082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.959 ms
00:26:04.568  [2024-12-09 17:15:27.494090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.568  [2024-12-09 17:15:27.519834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.568  [2024-12-09 17:15:27.519904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:26:04.568  [2024-12-09 17:15:27.519917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.645 ms
00:26:04.568  [2024-12-09 17:15:27.519925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.568  [2024-12-09 17:15:27.519974] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:04.568  [2024-12-09 17:15:27.519992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   131072 / 261120 	wr_cnt: 1	state: open
00:26:04.568  [2024-12-09 17:15:27.520005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.568  [2024-12-09 17:15:27.520014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.568  [2024-12-09 17:15:27.520024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.568  [2024-12-09 17:15:27.520032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.568  [2024-12-09 17:15:27.520042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.569  [2024-12-09 17:15:27.520890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.570  [2024-12-09 17:15:27.520899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.570  [2024-12-09 17:15:27.520907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:26:04.570  [2024-12-09 17:15:27.520916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
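Each band line above appears to read as "valid blocks / total blocks"; with the 4096-byte FTL block size reported elsewhere in this run, the per-band capacity works out as below (a quick check, not authoritative on the dump format):

  # 261120 blocks/band * 4096 B/block, expressed in MiB
  echo $(( 261120 * 4096 / 1024 / 1024 ))   # -> 1020 MiB per band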
00:26:04.570  [2024-12-09 17:15:27.520934] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:26:04.570  [2024-12-09 17:15:27.520944] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         ef202dc6-70d9-47d6-9bf2-4a23092fd7e2
00:26:04.570  [2024-12-09 17:15:27.520954] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    131072
00:26:04.570  [2024-12-09 17:15:27.520963] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        48576
00:26:04.570  [2024-12-09 17:15:27.520971] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         47616
00:26:04.570  [2024-12-09 17:15:27.520982] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0202
00:26:04.570  [2024-12-09 17:15:27.520997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:04.570  [2024-12-09 17:15:27.521015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:26:04.570  [2024-12-09 17:15:27.521024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:26:04.570  [2024-12-09 17:15:27.521032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:26:04.570  [2024-12-09 17:15:27.521040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
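The WAF figure in the stats dump above checks out against the write counters, assuming WAF is simply total writes divided by user writes:

  # 48576 total / 47616 user, from the dump above
  awk 'BEGIN { printf "WAF = %.4f\n", 48576 / 47616 }'   # -> WAF = 1.0202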
00:26:04.570  [2024-12-09 17:15:27.521050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.570  [2024-12-09 17:15:27.521059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:26:04.570  [2024-12-09 17:15:27.521068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.077 ms
00:26:04.570  [2024-12-09 17:15:27.521077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.570  [2024-12-09 17:15:27.536247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.570  [2024-12-09 17:15:27.536296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:26:04.570  [2024-12-09 17:15:27.536315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.146 ms
00:26:04.570  [2024-12-09 17:15:27.536324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.570  [2024-12-09 17:15:27.536776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:04.570  [2024-12-09 17:15:27.536787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:26:04.570  [2024-12-09 17:15:27.536797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.414 ms
00:26:04.570  [2024-12-09 17:15:27.536805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.570  [2024-12-09 17:15:27.576920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.570  [2024-12-09 17:15:27.576983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:26:04.570  [2024-12-09 17:15:27.576997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.570  [2024-12-09 17:15:27.577006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.570  [2024-12-09 17:15:27.577081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.570  [2024-12-09 17:15:27.577090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:26:04.570  [2024-12-09 17:15:27.577099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.570  [2024-12-09 17:15:27.577107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.570  [2024-12-09 17:15:27.577205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.570  [2024-12-09 17:15:27.577218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:26:04.570  [2024-12-09 17:15:27.577234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.570  [2024-12-09 17:15:27.577243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.570  [2024-12-09 17:15:27.577260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.570  [2024-12-09 17:15:27.577269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:26:04.570  [2024-12-09 17:15:27.577278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.570  [2024-12-09 17:15:27.577286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.669253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.669335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:26:04.831  [2024-12-09 17:15:27.669351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.669360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.743201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.743468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:26:04.831  [2024-12-09 17:15:27.743491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.743502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.743585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.743596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:26:04.831  [2024-12-09 17:15:27.743607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.743625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.743698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.743710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:26:04.831  [2024-12-09 17:15:27.743719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.743729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.743880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.743893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:26:04.831  [2024-12-09 17:15:27.743902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.743911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.743957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.743969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:26:04.831  [2024-12-09 17:15:27.743979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.743989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.744042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.744053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:26:04.831  [2024-12-09 17:15:27.744062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.744072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.744134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:04.831  [2024-12-09 17:15:27.744147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:26:04.831  [2024-12-09 17:15:27.744156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:04.831  [2024-12-09 17:15:27.744166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:04.831  [2024-12-09 17:15:27.744333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 680.460 ms, result 0
00:26:05.815  
00:26:05.815  
00:26:05.815   17:15:28 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:08.362  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
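The "testfile: OK" line is md5sum confirming that the data written before the FTL restore reads back bit-identical afterwards. The record/verify pattern, sketched standalone with generic names:

  md5sum testfile > testfile.md5   # records "<digest>  testfile" before the restore
  md5sum -c testfile.md5           # recomputes the digest and prints "testfile: OK" on match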
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:08.362  Process with pid 78710 is not found
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78710
00:26:08.362   17:15:30 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78710 ']'
00:26:08.362   17:15:30 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78710
00:26:08.362  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78710) - No such process
00:26:08.362   17:15:30 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78710 is not found'
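The "No such process" result above is the expected outcome of the liveness probe: kill -0 delivers no signal and only reports whether the pid still exists. A minimal sketch of the same check:

  pid=78710
  if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is still running"
  else
    echo "Process with pid $pid is not found"   # the branch taken here
  fi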
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:26:08.362  Remove shared memory files
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:26:08.362   17:15:30 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:26:08.362  ************************************
00:26:08.362  END TEST ftl_restore
00:26:08.362  ************************************
00:26:08.362  
00:26:08.362  real	5m18.427s
00:26:08.362  user	5m6.645s
00:26:08.362  sys	0m11.715s
00:26:08.362   17:15:30 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:08.362   17:15:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:26:08.362   17:15:30 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:26:08.362   17:15:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:26:08.362   17:15:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:08.362   17:15:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:26:08.362  ************************************
00:26:08.362  START TEST ftl_dirty_shutdown
00:26:08.362  ************************************
00:26:08.362   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:26:08.362  * Looking for test storage...
00:26:08.362  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:26:08.362    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:08.362     17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0
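The trace above is a field-by-field version comparison: each version string is split on the characters . - : and the numeric fields are compared left to right, so lt 1.15 2 succeeds because 1 < 2 in the first field. A minimal sketch of the idea (not the exact scripts/common.sh implementation; digits-only fields assumed):

  lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less in this field
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result traced above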
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:08.363  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:08.363  		--rc genhtml_branch_coverage=1
00:26:08.363  		--rc genhtml_function_coverage=1
00:26:08.363  		--rc genhtml_legend=1
00:26:08.363  		--rc geninfo_all_blocks=1
00:26:08.363  		--rc geninfo_unexecuted_blocks=1
00:26:08.363  		
00:26:08.363  		'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:08.363  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:08.363  		--rc genhtml_branch_coverage=1
00:26:08.363  		--rc genhtml_function_coverage=1
00:26:08.363  		--rc genhtml_legend=1
00:26:08.363  		--rc geninfo_all_blocks=1
00:26:08.363  		--rc geninfo_unexecuted_blocks=1
00:26:08.363  		
00:26:08.363  		'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:08.363  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:08.363  		--rc genhtml_branch_coverage=1
00:26:08.363  		--rc genhtml_function_coverage=1
00:26:08.363  		--rc genhtml_legend=1
00:26:08.363  		--rc geninfo_all_blocks=1
00:26:08.363  		--rc geninfo_unexecuted_blocks=1
00:26:08.363  		
00:26:08.363  		'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:08.363  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:08.363  		--rc genhtml_branch_coverage=1
00:26:08.363  		--rc genhtml_function_coverage=1
00:26:08.363  		--rc genhtml_legend=1
00:26:08.363  		--rc geninfo_all_blocks=1
00:26:08.363  		--rc geninfo_unexecuted_blocks=1
00:26:08.363  		
00:26:08.363  		'
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:26:08.363      17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:26:08.363     17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:08.363    17:15:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
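The option handling traced above is the standard getopts loop: -c sets the NV cache BDF, the remaining positional argument is the base device, and the "shift 2" corresponds to OPTIND-1 after one flag plus its value. A hypothetical standalone skeleton with the same option string:

  #!/usr/bin/env bash
  while getopts :u:c: opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # here: 0000:00:10.0
      u) uuid=$OPTARG ;;       # -u per the getopts string; the name "uuid" is a guess
    esac
  done
  shift $(( OPTIND - 1 ))      # the "shift 2" seen in the trace
  device=$1                    # here: 0000:00:11.0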
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82082
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82082
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82082 ']'
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:08.363  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:08.363   17:15:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:08.363  [2024-12-09 17:15:31.253927] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:26:08.363  [2024-12-09 17:15:31.254238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82082 ]
00:26:08.624  [2024-12-09 17:15:31.410628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:08.624  [2024-12-09 17:15:31.500165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
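spdk_tgt was launched with -m 0x1, a hexadecimal CPU core bitmask passed through to DPDK EAL as -c 0x1 above; only bit 0 is set, which matches "Total cores available: 1" and the single reactor on core 0. A quick decode of the mask:

  mask=0x1
  printf 'cores in mask:'; for i in {0..7}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo   # -> cores in mask: 0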
00:26:09.197   17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:09.197   17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0
00:26:09.197    17:15:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:26:09.197    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0
00:26:09.197    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:26:09.197    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424
00:26:09.197    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:26:09.197     17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:26:09.459    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:26:09.459    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size
00:26:09.459     17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:26:09.459     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:26:09.459     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:26:09.459     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:26:09.459     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:26:09.459      17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:26:09.720    {
00:26:09.720      "name": "nvme0n1",
00:26:09.720      "aliases": [
00:26:09.720        "2842d8b3-bdd1-4809-893d-ca48968ef9b6"
00:26:09.720      ],
00:26:09.720      "product_name": "NVMe disk",
00:26:09.720      "block_size": 4096,
00:26:09.720      "num_blocks": 1310720,
00:26:09.720      "uuid": "2842d8b3-bdd1-4809-893d-ca48968ef9b6",
00:26:09.720      "numa_id": -1,
00:26:09.720      "assigned_rate_limits": {
00:26:09.720        "rw_ios_per_sec": 0,
00:26:09.720        "rw_mbytes_per_sec": 0,
00:26:09.720        "r_mbytes_per_sec": 0,
00:26:09.720        "w_mbytes_per_sec": 0
00:26:09.720      },
00:26:09.720      "claimed": true,
00:26:09.720      "claim_type": "read_many_write_one",
00:26:09.720      "zoned": false,
00:26:09.720      "supported_io_types": {
00:26:09.720        "read": true,
00:26:09.720        "write": true,
00:26:09.720        "unmap": true,
00:26:09.720        "flush": true,
00:26:09.720        "reset": true,
00:26:09.720        "nvme_admin": true,
00:26:09.720        "nvme_io": true,
00:26:09.720        "nvme_io_md": false,
00:26:09.720        "write_zeroes": true,
00:26:09.720        "zcopy": false,
00:26:09.720        "get_zone_info": false,
00:26:09.720        "zone_management": false,
00:26:09.720        "zone_append": false,
00:26:09.720        "compare": true,
00:26:09.720        "compare_and_write": false,
00:26:09.720        "abort": true,
00:26:09.720        "seek_hole": false,
00:26:09.720        "seek_data": false,
00:26:09.720        "copy": true,
00:26:09.720        "nvme_iov_md": false
00:26:09.720      },
00:26:09.720      "driver_specific": {
00:26:09.720        "nvme": [
00:26:09.720          {
00:26:09.720            "pci_address": "0000:00:11.0",
00:26:09.720            "trid": {
00:26:09.720              "trtype": "PCIe",
00:26:09.720              "traddr": "0000:00:11.0"
00:26:09.720            },
00:26:09.720            "ctrlr_data": {
00:26:09.720              "cntlid": 0,
00:26:09.720              "vendor_id": "0x1b36",
00:26:09.720              "model_number": "QEMU NVMe Ctrl",
00:26:09.720              "serial_number": "12341",
00:26:09.720              "firmware_revision": "8.0.0",
00:26:09.720              "subnqn": "nqn.2019-08.org.qemu:12341",
00:26:09.720              "oacs": {
00:26:09.720                "security": 0,
00:26:09.720                "format": 1,
00:26:09.720                "firmware": 0,
00:26:09.720                "ns_manage": 1
00:26:09.720              },
00:26:09.720              "multi_ctrlr": false,
00:26:09.720              "ana_reporting": false
00:26:09.720            },
00:26:09.720            "vs": {
00:26:09.720              "nvme_version": "1.4"
00:26:09.720            },
00:26:09.720            "ns_data": {
00:26:09.720              "id": 1,
00:26:09.720              "can_share": false
00:26:09.720            }
00:26:09.720          }
00:26:09.720        ],
00:26:09.720        "mp_policy": "active_passive"
00:26:09.720      }
00:26:09.720    }
00:26:09.720  ]'
00:26:09.720      17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:26:09.720      17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:26:09.720    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:26:09.720    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
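The 5120 above is the base bdev size in MiB derived from the two jq fields: block_size 4096 times num_blocks 1310720. The -le test then fails (103424 MiB requested against a 5120 MiB device), and the trace continues by clearing old lvstores and carving the space as a thin-provisioned lvol instead (note the -t on bdev_lvol_create below):

  echo $(( 4096 * 1310720 / 1024 / 1024 ))   # -> 5120 MiB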
00:26:09.720    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:26:09.720     17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:26:09.982    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=145277a5-7295-4eb9-bce6-dd6f22e7caa9
00:26:09.982    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:26:09.982    17:15:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 145277a5-7295-4eb9-bce6-dd6f22e7caa9
00:26:10.244     17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a59fdfcc-7132-44d4-92eb-d71f831eb4a4
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a59fdfcc-7132-44d4-92eb-d71f831eb4a4
00:26:10.505   17:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:10.505   17:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']'
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:10.505    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size=
00:26:10.505     17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:10.505     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:10.505     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:26:10.505     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:26:10.505     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:26:10.505      17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:10.767     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:26:10.767    {
00:26:10.767      "name": "62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0",
00:26:10.767      "aliases": [
00:26:10.767        "lvs/nvme0n1p0"
00:26:10.767      ],
00:26:10.767      "product_name": "Logical Volume",
00:26:10.767      "block_size": 4096,
00:26:10.767      "num_blocks": 26476544,
00:26:10.767      "uuid": "62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0",
00:26:10.767      "assigned_rate_limits": {
00:26:10.767        "rw_ios_per_sec": 0,
00:26:10.767        "rw_mbytes_per_sec": 0,
00:26:10.767        "r_mbytes_per_sec": 0,
00:26:10.767        "w_mbytes_per_sec": 0
00:26:10.767      },
00:26:10.767      "claimed": false,
00:26:10.767      "zoned": false,
00:26:10.767      "supported_io_types": {
00:26:10.767        "read": true,
00:26:10.767        "write": true,
00:26:10.767        "unmap": true,
00:26:10.767        "flush": false,
00:26:10.767        "reset": true,
00:26:10.767        "nvme_admin": false,
00:26:10.767        "nvme_io": false,
00:26:10.767        "nvme_io_md": false,
00:26:10.767        "write_zeroes": true,
00:26:10.767        "zcopy": false,
00:26:10.767        "get_zone_info": false,
00:26:10.767        "zone_management": false,
00:26:10.767        "zone_append": false,
00:26:10.767        "compare": false,
00:26:10.767        "compare_and_write": false,
00:26:10.767        "abort": false,
00:26:10.767        "seek_hole": true,
00:26:10.767        "seek_data": true,
00:26:10.767        "copy": false,
00:26:10.767        "nvme_iov_md": false
00:26:10.767      },
00:26:10.767      "driver_specific": {
00:26:10.767        "lvol": {
00:26:10.767          "lvol_store_uuid": "a59fdfcc-7132-44d4-92eb-d71f831eb4a4",
00:26:10.767          "base_bdev": "nvme0n1",
00:26:10.767          "thin_provision": true,
00:26:10.767          "num_allocated_clusters": 0,
00:26:10.767          "snapshot": false,
00:26:10.767          "clone": false,
00:26:10.767          "esnap_clone": false
00:26:10.767        }
00:26:10.767      }
00:26:10.767    }
00:26:10.767  ]'
00:26:10.767      17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:26:10.767     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:26:10.767      17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:26:10.767     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:26:10.767     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:26:10.767     17:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:26:10.767    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171
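Same arithmetic for the thin-provisioned lvol: 26476544 blocks at 4096 B is exactly the 103424 MiB requested from bdev_lvol_create, and the -t flag is what lets that fit on the 5120 MiB base bdev. The base_size=5171 that follows also lines up with a 1/20 slice of that figure, presumably the cache sizing rule applied at ftl/common.sh@41 (an inference from the numbers, not a verified formula):

  echo $(( 26476544 * 4096 / 1024 / 1024 ))   # -> 103424 MiB
  echo $(( 103424 / 20 ))                     # -> 5171, matching base_size above (assumed sizing rule)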
00:26:10.767    17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:26:10.767     17:15:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:26:11.028    17:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:26:11.028    17:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:26:11.028     17:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:11.028     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:11.028     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:26:11.028     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:26:11.028     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:26:11.028      17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:11.289     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:26:11.289    {
00:26:11.289      "name": "62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0",
00:26:11.289      "aliases": [
00:26:11.289        "lvs/nvme0n1p0"
00:26:11.289      ],
00:26:11.289      "product_name": "Logical Volume",
00:26:11.289      "block_size": 4096,
00:26:11.289      "num_blocks": 26476544,
00:26:11.289      "uuid": "62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0",
00:26:11.289      "assigned_rate_limits": {
00:26:11.289        "rw_ios_per_sec": 0,
00:26:11.289        "rw_mbytes_per_sec": 0,
00:26:11.289        "r_mbytes_per_sec": 0,
00:26:11.289        "w_mbytes_per_sec": 0
00:26:11.289      },
00:26:11.289      "claimed": false,
00:26:11.289      "zoned": false,
00:26:11.289      "supported_io_types": {
00:26:11.289        "read": true,
00:26:11.289        "write": true,
00:26:11.289        "unmap": true,
00:26:11.289        "flush": false,
00:26:11.289        "reset": true,
00:26:11.289        "nvme_admin": false,
00:26:11.289        "nvme_io": false,
00:26:11.289        "nvme_io_md": false,
00:26:11.289        "write_zeroes": true,
00:26:11.289        "zcopy": false,
00:26:11.289        "get_zone_info": false,
00:26:11.289        "zone_management": false,
00:26:11.289        "zone_append": false,
00:26:11.289        "compare": false,
00:26:11.289        "compare_and_write": false,
00:26:11.289        "abort": false,
00:26:11.289        "seek_hole": true,
00:26:11.289        "seek_data": true,
00:26:11.289        "copy": false,
00:26:11.289        "nvme_iov_md": false
00:26:11.289      },
00:26:11.289      "driver_specific": {
00:26:11.289        "lvol": {
00:26:11.289          "lvol_store_uuid": "a59fdfcc-7132-44d4-92eb-d71f831eb4a4",
00:26:11.289          "base_bdev": "nvme0n1",
00:26:11.289          "thin_provision": true,
00:26:11.289          "num_allocated_clusters": 0,
00:26:11.289          "snapshot": false,
00:26:11.289          "clone": false,
00:26:11.289          "esnap_clone": false
00:26:11.289        }
00:26:11.289      }
00:26:11.289    }
00:26:11.289  ]'
00:26:11.289      17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:26:11.289     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:26:11.289      17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:26:11.289     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:26:11.289     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:26:11.289     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:26:11.289    17:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
00:26:11.289    17:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:26:11.550   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
00:26:11.550    17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:11.550    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:11.550    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:26:11.550    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:26:11.550    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:26:11.550     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0
00:26:11.810    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:26:11.810    {
00:26:11.810      "name": "62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0",
00:26:11.810      "aliases": [
00:26:11.810        "lvs/nvme0n1p0"
00:26:11.810      ],
00:26:11.810      "product_name": "Logical Volume",
00:26:11.810      "block_size": 4096,
00:26:11.810      "num_blocks": 26476544,
00:26:11.810      "uuid": "62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0",
00:26:11.810      "assigned_rate_limits": {
00:26:11.810        "rw_ios_per_sec": 0,
00:26:11.810        "rw_mbytes_per_sec": 0,
00:26:11.810        "r_mbytes_per_sec": 0,
00:26:11.810        "w_mbytes_per_sec": 0
00:26:11.810      },
00:26:11.810      "claimed": false,
00:26:11.810      "zoned": false,
00:26:11.810      "supported_io_types": {
00:26:11.810        "read": true,
00:26:11.810        "write": true,
00:26:11.810        "unmap": true,
00:26:11.810        "flush": false,
00:26:11.810        "reset": true,
00:26:11.810        "nvme_admin": false,
00:26:11.810        "nvme_io": false,
00:26:11.810        "nvme_io_md": false,
00:26:11.810        "write_zeroes": true,
00:26:11.810        "zcopy": false,
00:26:11.810        "get_zone_info": false,
00:26:11.810        "zone_management": false,
00:26:11.810        "zone_append": false,
00:26:11.810        "compare": false,
00:26:11.810        "compare_and_write": false,
00:26:11.810        "abort": false,
00:26:11.810        "seek_hole": true,
00:26:11.810        "seek_data": true,
00:26:11.810        "copy": false,
00:26:11.810        "nvme_iov_md": false
00:26:11.810      },
00:26:11.810      "driver_specific": {
00:26:11.810        "lvol": {
00:26:11.810          "lvol_store_uuid": "a59fdfcc-7132-44d4-92eb-d71f831eb4a4",
00:26:11.810          "base_bdev": "nvme0n1",
00:26:11.810          "thin_provision": true,
00:26:11.810          "num_allocated_clusters": 0,
00:26:11.810          "snapshot": false,
00:26:11.810          "clone": false,
00:26:11.810          "esnap_clone": false
00:26:11.810        }
00:26:11.810      }
00:26:11.810    }
00:26:11.810  ]'
00:26:11.810     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:26:11.810    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:26:11.810     17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:26:11.810    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:26:11.810    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:26:11.810    17:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:26:11.810   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
00:26:11.810   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0 --l2p_dram_limit 10'
00:26:11.810   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
00:26:11.810   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
00:26:11.810   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
00:26:11.810   17:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 62abd14e-d1c4-414e-a6e5-dc9e7c8e4eb0 --l2p_dram_limit 10 -c nvc0n1p0
00:26:12.072  [2024-12-09 17:15:34.910326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.072  [2024-12-09 17:15:34.910465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:26:12.072  [2024-12-09 17:15:34.910488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:26:12.072  [2024-12-09 17:15:34.910495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.072  [2024-12-09 17:15:34.910561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.072  [2024-12-09 17:15:34.910570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:26:12.072  [2024-12-09 17:15:34.910579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.046 ms
00:26:12.072  [2024-12-09 17:15:34.910586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.072  [2024-12-09 17:15:34.910608] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:12.072  [2024-12-09 17:15:34.911267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:12.072  [2024-12-09 17:15:34.911285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.072  [2024-12-09 17:15:34.911292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:26:12.072  [2024-12-09 17:15:34.911301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.684 ms
00:26:12.072  [2024-12-09 17:15:34.911307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.072  [2024-12-09 17:15:34.911333] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 733c0fa8-9e02-4a21-8416-115b8afc7a4a
00:26:12.072  [2024-12-09 17:15:34.912760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.072  [2024-12-09 17:15:34.912796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:26:12.072  [2024-12-09 17:15:34.912806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.025 ms
00:26:12.072  [2024-12-09 17:15:34.912815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.919789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.919822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:26:12.073  [2024-12-09 17:15:34.919830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.930 ms
00:26:12.073  [2024-12-09 17:15:34.919838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.919925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.919935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:26:12.073  [2024-12-09 17:15:34.919942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.058 ms
00:26:12.073  [2024-12-09 17:15:34.919952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.919995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.920005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:26:12.073  [2024-12-09 17:15:34.920014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:26:12.073  [2024-12-09 17:15:34.920025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.920045] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:12.073  [2024-12-09 17:15:34.923351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.923378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:26:12.073  [2024-12-09 17:15:34.923387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.310 ms
00:26:12.073  [2024-12-09 17:15:34.923393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.923425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.923432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:26:12.073  [2024-12-09 17:15:34.923440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:26:12.073  [2024-12-09 17:15:34.923446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.923467] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:26:12.073  [2024-12-09 17:15:34.923586] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:12.073  [2024-12-09 17:15:34.923599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:12.073  [2024-12-09 17:15:34.923608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:12.073  [2024-12-09 17:15:34.923617] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:26:12.073  [2024-12-09 17:15:34.923624] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:26:12.073  [2024-12-09 17:15:34.923632] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:26:12.073  [2024-12-09 17:15:34.923638] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:26:12.073  [2024-12-09 17:15:34.923649] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:26:12.073  [2024-12-09 17:15:34.923654] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
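The layout numbers above are self-consistent: 20971520 L2P entries at 4 bytes each (the reported L2P address size) need 80 MiB, which is exactly the "Region l2p ... 80.00 MiB" entry in the NV cache layout dump that follows:

  echo $(( 20971520 * 4 / 1024 / 1024 ))      # -> 80 MiB for the L2P mapping table
  echo $(( 20971520 * 4096 / 1024 / 1024 ))   # -> 81920 MiB of 4 KiB blocks addressable by this L2P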
00:26:12.073  [2024-12-09 17:15:34.923662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.923673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:26:12.073  [2024-12-09 17:15:34.923681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.197 ms
00:26:12.073  [2024-12-09 17:15:34.923687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.923754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.073  [2024-12-09 17:15:34.923761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:26:12.073  [2024-12-09 17:15:34.923768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:26:12.073  [2024-12-09 17:15:34.923774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.073  [2024-12-09 17:15:34.923869] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:12.073  [2024-12-09 17:15:34.923878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:26:12.073  [2024-12-09 17:15:34.923887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:26:12.073  [2024-12-09 17:15:34.923892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.923900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:26:12.073  [2024-12-09 17:15:34.923905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.923912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:26:12.073  [2024-12-09 17:15:34.923917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:26:12.073  [2024-12-09 17:15:34.923925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:26:12.073  [2024-12-09 17:15:34.923930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:26:12.073  [2024-12-09 17:15:34.923937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:26:12.073  [2024-12-09 17:15:34.923942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:26:12.073  [2024-12-09 17:15:34.923949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:26:12.073  [2024-12-09 17:15:34.923955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:26:12.073  [2024-12-09 17:15:34.923961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:26:12.073  [2024-12-09 17:15:34.923966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.923975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:26:12.073  [2024-12-09 17:15:34.923980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:26:12.073  [2024-12-09 17:15:34.923986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.923997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:26:12.073  [2024-12-09 17:15:34.924004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:12.073  [2024-12-09 17:15:34.924015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:26:12.073  [2024-12-09 17:15:34.924021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:12.073  [2024-12-09 17:15:34.924034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:26:12.073  [2024-12-09 17:15:34.924041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:12.073  [2024-12-09 17:15:34.924052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:26:12.073  [2024-12-09 17:15:34.924057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:12.073  [2024-12-09 17:15:34.924078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:26:12.073  [2024-12-09 17:15:34.924086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:26:12.073  [2024-12-09 17:15:34.924102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:26:12.073  [2024-12-09 17:15:34.924107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:26:12.073  [2024-12-09 17:15:34.924113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:26:12.073  [2024-12-09 17:15:34.924118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:26:12.073  [2024-12-09 17:15:34.924125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:26:12.073  [2024-12-09 17:15:34.924132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:26:12.073  [2024-12-09 17:15:34.924144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:26:12.073  [2024-12-09 17:15:34.924152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924157] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:12.073  [2024-12-09 17:15:34.924165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:26:12.073  [2024-12-09 17:15:34.924170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:26:12.073  [2024-12-09 17:15:34.924177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:12.073  [2024-12-09 17:15:34.924186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:26:12.073  [2024-12-09 17:15:34.924195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:26:12.073  [2024-12-09 17:15:34.924201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:26:12.073  [2024-12-09 17:15:34.924208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:26:12.073  [2024-12-09 17:15:34.924212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:26:12.073  [2024-12-09 17:15:34.924219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:26:12.073  [2024-12-09 17:15:34.924225] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:12.073  [2024-12-09 17:15:34.924238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:12.073  [2024-12-09 17:15:34.924250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:26:12.073  [2024-12-09 17:15:34.924262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:26:12.073  [2024-12-09 17:15:34.924270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:26:12.073  [2024-12-09 17:15:34.924278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:26:12.073  [2024-12-09 17:15:34.924283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:26:12.073  [2024-12-09 17:15:34.924294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:26:12.073  [2024-12-09 17:15:34.924299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:26:12.073  [2024-12-09 17:15:34.924309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:26:12.073  [2024-12-09 17:15:34.924314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:26:12.073  [2024-12-09 17:15:34.924323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:26:12.074  [2024-12-09 17:15:34.924328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:26:12.074  [2024-12-09 17:15:34.924335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:26:12.074  [2024-12-09 17:15:34.924341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:26:12.074  [2024-12-09 17:15:34.924348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:26:12.074  [2024-12-09 17:15:34.924353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:12.074  [2024-12-09 17:15:34.924361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:12.074  [2024-12-09 17:15:34.924367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:12.074  [2024-12-09 17:15:34.924374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:26:12.074  [2024-12-09 17:15:34.924394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:26:12.074  [2024-12-09 17:15:34.924401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:12.074  [2024-12-09 17:15:34.924408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:12.074  [2024-12-09 17:15:34.924415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:26:12.074  [2024-12-09 17:15:34.924421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.607 ms
00:26:12.074  [2024-12-09 17:15:34.924429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:12.074  [2024-12-09 17:15:34.924473] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:26:12.074  [2024-12-09 17:15:34.924485] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:26:16.278  [2024-12-09 17:15:38.879595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.880051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:26:16.278  [2024-12-09 17:15:38.880087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3955.103 ms
00:26:16.278  [2024-12-09 17:15:38.880101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.918547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.918627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:26:16.278  [2024-12-09 17:15:38.918643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.175 ms
00:26:16.278  [2024-12-09 17:15:38.918655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.918814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.918830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:26:16.278  [2024-12-09 17:15:38.918840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.079 ms
00:26:16.278  [2024-12-09 17:15:38.918888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.959309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.959370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:26:16.278  [2024-12-09 17:15:38.959384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 40.378 ms
00:26:16.278  [2024-12-09 17:15:38.959396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.959438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.959454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:26:16.278  [2024-12-09 17:15:38.959465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:26:16.278  [2024-12-09 17:15:38.959486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.960260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.960303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:26:16.278  [2024-12-09 17:15:38.960317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.717 ms
00:26:16.278  [2024-12-09 17:15:38.960330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.960476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.960491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:26:16.278  [2024-12-09 17:15:38.960505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.118 ms
00:26:16.278  [2024-12-09 17:15:38.960521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:38.981135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:38.981188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:26:16.278  [2024-12-09 17:15:38.981200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.592 ms
00:26:16.278  [2024-12-09 17:15:38.981212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.010104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:26:16.278  [2024-12-09 17:15:39.015387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.015440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:26:16.278  [2024-12-09 17:15:39.015457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.056 ms
00:26:16.278  [2024-12-09 17:15:39.015466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.122068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.122134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:26:16.278  [2024-12-09 17:15:39.122154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 106.545 ms
00:26:16.278  [2024-12-09 17:15:39.122164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.122397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.122412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:26:16.278  [2024-12-09 17:15:39.122429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.176 ms
00:26:16.278  [2024-12-09 17:15:39.122438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.147933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.147982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:26:16.278  [2024-12-09 17:15:39.147999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.437 ms
00:26:16.278  [2024-12-09 17:15:39.148009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.172956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.173165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:26:16.278  [2024-12-09 17:15:39.173195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.888 ms
00:26:16.278  [2024-12-09 17:15:39.173204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.173892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.173918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:26:16.278  [2024-12-09 17:15:39.173932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.621 ms
00:26:16.278  [2024-12-09 17:15:39.173944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.264831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.264898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:26:16.278  [2024-12-09 17:15:39.264921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 90.831 ms
00:26:16.278  [2024-12-09 17:15:39.264931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.278  [2024-12-09 17:15:39.293933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.278  [2024-12-09 17:15:39.293991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:26:16.278  [2024-12-09 17:15:39.294010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.910 ms
00:26:16.278  [2024-12-09 17:15:39.294019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.540  [2024-12-09 17:15:39.322070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.540  [2024-12-09 17:15:39.322139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:26:16.540  [2024-12-09 17:15:39.322161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.987 ms
00:26:16.540  [2024-12-09 17:15:39.322173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.540  [2024-12-09 17:15:39.354213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.540  [2024-12-09 17:15:39.354276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:26:16.540  [2024-12-09 17:15:39.354294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.956 ms
00:26:16.540  [2024-12-09 17:15:39.354303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.540  [2024-12-09 17:15:39.354378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.540  [2024-12-09 17:15:39.354389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:26:16.540  [2024-12-09 17:15:39.354407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:26:16.540  [2024-12-09 17:15:39.354415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.540  [2024-12-09 17:15:39.354545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.540  [2024-12-09 17:15:39.354560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:26:16.540  [2024-12-09 17:15:39.354573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.043 ms
00:26:16.540  [2024-12-09 17:15:39.354581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:16.540  [2024-12-09 17:15:39.356433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4445.197 ms, result 0
00:26:16.540  {
00:26:16.540    "name": "ftl0",
00:26:16.540    "uuid": "733c0fa8-9e02-4a21-8416-115b8afc7a4a"
00:26:16.540  }
00:26:16.540   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:26:16.540   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:26:16.801  /dev/nbd0
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:26:16.801   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:26:17.062  1+0 records in
00:26:17.062  1+0 records out
00:26:17.062  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489815 s, 8.4 MB/s
00:26:17.062    17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
00:26:17.062   17:15:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:26:17.062  [2024-12-09 17:15:39.928650] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:26:17.063  [2024-12-09 17:15:39.928800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82228 ]
00:26:17.063  [2024-12-09 17:15:40.093440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:17.324  [2024-12-09 17:15:40.216943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:18.713  
[2024-12-09T17:15:42.699Z] Copying: 192/1024 [MB] (192 MBps)
[2024-12-09T17:15:43.644Z] Copying: 388/1024 [MB] (196 MBps)
[2024-12-09T17:15:44.588Z] Copying: 584/1024 [MB] (195 MBps)
[2024-12-09T17:15:45.547Z] Copying: 779/1024 [MB] (195 MBps)
[2024-12-09T17:15:45.809Z] Copying: 973/1024 [MB] (193 MBps)
[2024-12-09T17:15:46.753Z] Copying: 1024/1024 [MB] (average 194 MBps)
00:26:23.712  
00:26:23.712   17:15:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:26:25.629   17:15:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:26:25.629  [2024-12-09 17:15:48.604085] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:26:25.629  [2024-12-09 17:15:48.604190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82321 ]
00:26:25.892  [2024-12-09 17:15:48.759985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:25.892  [2024-12-09 17:15:48.862834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:27.279  
[2024-12-09T17:15:51.266Z] Copying: 12/1024 [MB] (12 MBps)
[2024-12-09T17:15:52.210Z] Copying: 22420/1048576 [kB] (10032 kBps)
[2024-12-09T17:15:53.154Z] Copying: 31248/1048576 [kB] (8828 kBps)
[2024-12-09T17:15:54.098Z] Copying: 43/1024 [MB] (13 MBps)
[2024-12-09T17:15:55.483Z] Copying: 55/1024 [MB] (11 MBps)
[2024-12-09T17:15:56.426Z] Copying: 68/1024 [MB] (13 MBps)
[2024-12-09T17:15:57.370Z] Copying: 82/1024 [MB] (13 MBps)
[2024-12-09T17:15:58.314Z] Copying: 101/1024 [MB] (18 MBps)
[2024-12-09T17:15:59.254Z] Copying: 119/1024 [MB] (18 MBps)
[2024-12-09T17:16:00.197Z] Copying: 136/1024 [MB] (17 MBps)
[2024-12-09T17:16:01.158Z] Copying: 150/1024 [MB] (13 MBps)
[2024-12-09T17:16:02.104Z] Copying: 166/1024 [MB] (16 MBps)
[2024-12-09T17:16:03.490Z] Copying: 186/1024 [MB] (19 MBps)
[2024-12-09T17:16:04.434Z] Copying: 206/1024 [MB] (19 MBps)
[2024-12-09T17:16:05.379Z] Copying: 225/1024 [MB] (19 MBps)
[2024-12-09T17:16:06.321Z] Copying: 246/1024 [MB] (21 MBps)
[2024-12-09T17:16:07.265Z] Copying: 266/1024 [MB] (20 MBps)
[2024-12-09T17:16:08.203Z] Copying: 284/1024 [MB] (17 MBps)
[2024-12-09T17:16:09.146Z] Copying: 307/1024 [MB] (23 MBps)
[2024-12-09T17:16:10.088Z] Copying: 330/1024 [MB] (23 MBps)
[2024-12-09T17:16:11.472Z] Copying: 351/1024 [MB] (20 MBps)
[2024-12-09T17:16:12.414Z] Copying: 372/1024 [MB] (20 MBps)
[2024-12-09T17:16:13.357Z] Copying: 393/1024 [MB] (21 MBps)
[2024-12-09T17:16:14.300Z] Copying: 410/1024 [MB] (17 MBps)
[2024-12-09T17:16:15.245Z] Copying: 427/1024 [MB] (16 MBps)
[2024-12-09T17:16:16.189Z] Copying: 443/1024 [MB] (16 MBps)
[2024-12-09T17:16:17.169Z] Copying: 457/1024 [MB] (13 MBps)
[2024-12-09T17:16:18.102Z] Copying: 478/1024 [MB] (21 MBps)
[2024-12-09T17:16:19.474Z] Copying: 512/1024 [MB] (34 MBps)
[2024-12-09T17:16:20.408Z] Copying: 541/1024 [MB] (28 MBps)
[2024-12-09T17:16:21.342Z] Copying: 568/1024 [MB] (27 MBps)
[2024-12-09T17:16:22.276Z] Copying: 597/1024 [MB] (28 MBps)
[2024-12-09T17:16:23.210Z] Copying: 627/1024 [MB] (30 MBps)
[2024-12-09T17:16:24.144Z] Copying: 656/1024 [MB] (29 MBps)
[2024-12-09T17:16:25.516Z] Copying: 691/1024 [MB] (34 MBps)
[2024-12-09T17:16:26.091Z] Copying: 723/1024 [MB] (31 MBps)
[2024-12-09T17:16:27.464Z] Copying: 752/1024 [MB] (29 MBps)
[2024-12-09T17:16:28.396Z] Copying: 782/1024 [MB] (30 MBps)
[2024-12-09T17:16:29.329Z] Copying: 812/1024 [MB] (29 MBps)
[2024-12-09T17:16:30.262Z] Copying: 841/1024 [MB] (29 MBps)
[2024-12-09T17:16:31.195Z] Copying: 877/1024 [MB] (35 MBps)
[2024-12-09T17:16:32.129Z] Copying: 907/1024 [MB] (30 MBps)
[2024-12-09T17:16:33.569Z] Copying: 941/1024 [MB] (34 MBps)
[2024-12-09T17:16:34.135Z] Copying: 970/1024 [MB] (28 MBps)
[2024-12-09T17:16:35.068Z] Copying: 1001/1024 [MB] (30 MBps)
[2024-12-09T17:16:35.635Z] Copying: 1024/1024 [MB] (average 22 MBps)
00:27:12.594  
00:27:12.594   17:16:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:27:12.594   17:16:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:27:12.852   17:16:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:27:12.852  [2024-12-09 17:16:35.828732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.828774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:27:12.852  [2024-12-09 17:16:35.828787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:27:12.852  [2024-12-09 17:16:35.828795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.852  [2024-12-09 17:16:35.828816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:12.852  [2024-12-09 17:16:35.830946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.830973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:27:12.852  [2024-12-09 17:16:35.830984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.114 ms
00:27:12.852  [2024-12-09 17:16:35.830991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.852  [2024-12-09 17:16:35.832969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.832998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:27:12.852  [2024-12-09 17:16:35.833007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.955 ms
00:27:12.852  [2024-12-09 17:16:35.833014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.852  [2024-12-09 17:16:35.848247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.848277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:27:12.852  [2024-12-09 17:16:35.848288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.214 ms
00:27:12.852  [2024-12-09 17:16:35.848294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.852  [2024-12-09 17:16:35.853152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.853178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:27:12.852  [2024-12-09 17:16:35.853189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.829 ms
00:27:12.852  [2024-12-09 17:16:35.853196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.852  [2024-12-09 17:16:35.871520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.871549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:27:12.852  [2024-12-09 17:16:35.871559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.271 ms
00:27:12.852  [2024-12-09 17:16:35.871565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.852  [2024-12-09 17:16:35.884104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.852  [2024-12-09 17:16:35.884134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:27:12.853  [2024-12-09 17:16:35.884146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.504 ms
00:27:12.853  [2024-12-09 17:16:35.884153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.853  [2024-12-09 17:16:35.884260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.853  [2024-12-09 17:16:35.884268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:27:12.853  [2024-12-09 17:16:35.884276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.076 ms
00:27:12.853  [2024-12-09 17:16:35.884282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.112  [2024-12-09 17:16:35.902061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.112  [2024-12-09 17:16:35.902091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:27:13.112  [2024-12-09 17:16:35.902101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.764 ms
00:27:13.112  [2024-12-09 17:16:35.902106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.112  [2024-12-09 17:16:35.919740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.112  [2024-12-09 17:16:35.919767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:27:13.112  [2024-12-09 17:16:35.919777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.603 ms
00:27:13.112  [2024-12-09 17:16:35.919782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.112  [2024-12-09 17:16:35.937058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.112  [2024-12-09 17:16:35.937083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:27:13.112  [2024-12-09 17:16:35.937093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.244 ms
00:27:13.112  [2024-12-09 17:16:35.937098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.112  [2024-12-09 17:16:35.954128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.112  [2024-12-09 17:16:35.954154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:27:13.112  [2024-12-09 17:16:35.954163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.971 ms
00:27:13.112  [2024-12-09 17:16:35.954168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.112  [2024-12-09 17:16:35.954197] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:13.112  [2024-12-09 17:16:35.954209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.112  [2024-12-09 17:16:35.954218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.112  [2024-12-09 17:16:35.954225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.112  [2024-12-09 17:16:35.954232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.112  [2024-12-09 17:16:35.954238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.113  [2024-12-09 17:16:35.954858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.114  [2024-12-09 17:16:35.954864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.114  [2024-12-09 17:16:35.954871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.114  [2024-12-09 17:16:35.954877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.114  [2024-12-09 17:16:35.954885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:27:13.114  [2024-12-09 17:16:35.954897] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:27:13.114  [2024-12-09 17:16:35.954905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         733c0fa8-9e02-4a21-8416-115b8afc7a4a
00:27:13.114  [2024-12-09 17:16:35.954911] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:27:13.114  [2024-12-09 17:16:35.954920] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:27:13.114  [2024-12-09 17:16:35.954928] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:27:13.114  [2024-12-09 17:16:35.954935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:27:13.114  [2024-12-09 17:16:35.954941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:13.114  [2024-12-09 17:16:35.954948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:27:13.114  [2024-12-09 17:16:35.954954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:27:13.114  [2024-12-09 17:16:35.954960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:27:13.114  [2024-12-09 17:16:35.954965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:27:13.114  [2024-12-09 17:16:35.954972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.114  [2024-12-09 17:16:35.954978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:27:13.114  [2024-12-09 17:16:35.954986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.777 ms
00:27:13.114  [2024-12-09 17:16:35.954991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:35.964906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.114  [2024-12-09 17:16:35.964931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:27:13.114  [2024-12-09 17:16:35.964940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.889 ms
00:27:13.114  [2024-12-09 17:16:35.964947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:35.965236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.114  [2024-12-09 17:16:35.965251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:27:13.114  [2024-12-09 17:16:35.965259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.271 ms
00:27:13.114  [2024-12-09 17:16:35.965264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:35.999817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:35.999854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:13.114  [2024-12-09 17:16:35.999864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:35.999871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:35.999930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:35.999937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:13.114  [2024-12-09 17:16:35.999945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:35.999950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.000038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.000049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:13.114  [2024-12-09 17:16:36.000056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.000062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.000079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.000085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:13.114  [2024-12-09 17:16:36.000092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.000098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.062093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.062132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:13.114  [2024-12-09 17:16:36.062143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.062150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.112591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.112631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:13.114  [2024-12-09 17:16:36.112641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.112647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.112719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.112727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:13.114  [2024-12-09 17:16:36.112738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.112744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.112801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.112809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:13.114  [2024-12-09 17:16:36.112817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.112823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.112910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.112918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:13.114  [2024-12-09 17:16:36.112926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.112934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.112966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.112974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:27:13.114  [2024-12-09 17:16:36.112982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.112988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.113022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.113030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:13.114  [2024-12-09 17:16:36.113037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.113045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.113086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:13.114  [2024-12-09 17:16:36.113094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:13.114  [2024-12-09 17:16:36.113102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:13.114  [2024-12-09 17:16:36.113108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.114  [2024-12-09 17:16:36.113226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 284.458 ms, result 0
00:27:13.114  true
00:27:13.114   17:16:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82082
00:27:13.114   17:16:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82082
00:27:13.114   17:16:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:27:13.372  [2024-12-09 17:16:36.203500] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:27:13.372  [2024-12-09 17:16:36.203626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82811 ]
00:27:13.373  [2024-12-09 17:16:36.362747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:13.631  [2024-12-09 17:16:36.450352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.005  
[2024-12-09T17:16:38.979Z] Copying: 252/1024 [MB] (252 MBps)
[2024-12-09T17:16:39.914Z] Copying: 508/1024 [MB] (255 MBps)
[2024-12-09T17:16:40.848Z] Copying: 762/1024 [MB] (254 MBps)
[2024-12-09T17:16:40.848Z] Copying: 1016/1024 [MB] (253 MBps)
[2024-12-09T17:16:41.413Z] Copying: 1024/1024 [MB] (average 254 MBps)
00:27:18.372  
00:27:18.372  /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82082 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:27:18.372   17:16:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:18.372  [2024-12-09 17:16:41.363214] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:27:18.372  [2024-12-09 17:16:41.363346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82865 ]
00:27:18.630  [2024-12-09 17:16:41.521770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:18.630  [2024-12-09 17:16:41.610227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:18.888  [2024-12-09 17:16:41.842820] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:18.888  [2024-12-09 17:16:41.842886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:18.888  [2024-12-09 17:16:41.905948] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:27:18.888  [2024-12-09 17:16:41.906235] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:27:18.888  [2024-12-09 17:16:41.906435] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:27:19.147  [2024-12-09 17:16:42.089258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.089291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:27:19.147  [2024-12-09 17:16:42.089302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:27:19.147  [2024-12-09 17:16:42.089311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.089346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.089354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:19.147  [2024-12-09 17:16:42.089360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.022 ms
00:27:19.147  [2024-12-09 17:16:42.089367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.089381] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:19.147  [2024-12-09 17:16:42.089898] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:19.147  [2024-12-09 17:16:42.089916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.089923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:19.147  [2024-12-09 17:16:42.089930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.540 ms
00:27:19.147  [2024-12-09 17:16:42.089936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.091154] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:19.147  [2024-12-09 17:16:42.101173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.101202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:27:19.147  [2024-12-09 17:16:42.101212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.020 ms
00:27:19.147  [2024-12-09 17:16:42.101219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.101266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.101274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:27:19.147  [2024-12-09 17:16:42.101280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.020 ms
00:27:19.147  [2024-12-09 17:16:42.101286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.107390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.107414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:19.147  [2024-12-09 17:16:42.107422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.060 ms
00:27:19.147  [2024-12-09 17:16:42.107429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.107486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.107493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:19.147  [2024-12-09 17:16:42.107500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.044 ms
00:27:19.147  [2024-12-09 17:16:42.107506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.107541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.107549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:27:19.147  [2024-12-09 17:16:42.107555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:27:19.147  [2024-12-09 17:16:42.107561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.107576] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:19.147  [2024-12-09 17:16:42.110506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.110528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:19.147  [2024-12-09 17:16:42.110536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.935 ms
00:27:19.147  [2024-12-09 17:16:42.110542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.110572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.110579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:27:19.147  [2024-12-09 17:16:42.110585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:27:19.147  [2024-12-09 17:16:42.110591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.110609] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:19.147  [2024-12-09 17:16:42.110626] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:27:19.147  [2024-12-09 17:16:42.110653] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:19.147  [2024-12-09 17:16:42.110665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:27:19.147  [2024-12-09 17:16:42.110747] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:19.147  [2024-12-09 17:16:42.110755] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:19.147  [2024-12-09 17:16:42.110763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:19.147  [2024-12-09 17:16:42.110772] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:27:19.147  [2024-12-09 17:16:42.110779] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:27:19.147  [2024-12-09 17:16:42.110785] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:27:19.147  [2024-12-09 17:16:42.110791] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:27:19.147  [2024-12-09 17:16:42.110796] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:27:19.147  [2024-12-09 17:16:42.110802] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:27:19.147  [2024-12-09 17:16:42.110808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.110814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:27:19.147  [2024-12-09 17:16:42.110820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.202 ms
00:27:19.147  [2024-12-09 17:16:42.110826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.110899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.147  [2024-12-09 17:16:42.110908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:27:19.147  [2024-12-09 17:16:42.110914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.063 ms
00:27:19.147  [2024-12-09 17:16:42.110920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.147  [2024-12-09 17:16:42.110998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:19.147  [2024-12-09 17:16:42.111006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:19.147  [2024-12-09 17:16:42.111012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:27:19.147  [2024-12-09 17:16:42.111018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.147  [2024-12-09 17:16:42.111024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:19.147  [2024-12-09 17:16:42.111030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:27:19.147  [2024-12-09 17:16:42.111035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:27:19.147  [2024-12-09 17:16:42.111042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:19.147  [2024-12-09 17:16:42.111047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:27:19.147  [2024-12-09 17:16:42.111058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:27:19.147  [2024-12-09 17:16:42.111063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:19.147  [2024-12-09 17:16:42.111068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:27:19.147  [2024-12-09 17:16:42.111073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:27:19.147  [2024-12-09 17:16:42.111078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:19.148  [2024-12-09 17:16:42.111083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:27:19.148  [2024-12-09 17:16:42.111087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:19.148  [2024-12-09 17:16:42.111097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:19.148  [2024-12-09 17:16:42.111113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:19.148  [2024-12-09 17:16:42.111127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:19.148  [2024-12-09 17:16:42.111142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:19.148  [2024-12-09 17:16:42.111159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:19.148  [2024-12-09 17:16:42.111174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:27:19.148  [2024-12-09 17:16:42.111184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:19.148  [2024-12-09 17:16:42.111189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:27:19.148  [2024-12-09 17:16:42.111195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:27:19.148  [2024-12-09 17:16:42.111200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:27:19.148  [2024-12-09 17:16:42.111205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:27:19.148  [2024-12-09 17:16:42.111210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:27:19.148  [2024-12-09 17:16:42.111220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:27:19.148  [2024-12-09 17:16:42.111225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111230] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:19.148  [2024-12-09 17:16:42.111236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:19.148  [2024-12-09 17:16:42.111243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.148  [2024-12-09 17:16:42.111254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:19.148  [2024-12-09 17:16:42.111259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:27:19.148  [2024-12-09 17:16:42.111264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:27:19.148  [2024-12-09 17:16:42.111269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:27:19.148  [2024-12-09 17:16:42.111274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:27:19.148  [2024-12-09 17:16:42.111279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:27:19.148  [2024-12-09 17:16:42.111286] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:27:19.148  [2024-12-09 17:16:42.111292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:27:19.148  [2024-12-09 17:16:42.111304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:27:19.148  [2024-12-09 17:16:42.111310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:27:19.148  [2024-12-09 17:16:42.111315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:27:19.148  [2024-12-09 17:16:42.111320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:27:19.148  [2024-12-09 17:16:42.111327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:27:19.148  [2024-12-09 17:16:42.111333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:27:19.148  [2024-12-09 17:16:42.111338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:27:19.148  [2024-12-09 17:16:42.111344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:27:19.148  [2024-12-09 17:16:42.111350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:27:19.148  [2024-12-09 17:16:42.111377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:27:19.148  [2024-12-09 17:16:42.111383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:19.148  [2024-12-09 17:16:42.111396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:19.148  [2024-12-09 17:16:42.111401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:19.148  [2024-12-09 17:16:42.111407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:19.148  [2024-12-09 17:16:42.111412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.111418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:27:19.148  [2024-12-09 17:16:42.111424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.467 ms
00:27:19.148  [2024-12-09 17:16:42.111430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
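(Editor's worked example, not part of the captured log.) The superblock dump above expresses regions in blocks while the earlier layout dump expresses them in MiB; the two agree if one block is 4 KiB (SPDK FTL's usual block size — an assumption here). For instance, the l2p region's blk_sz:0x5000 is 20480 blocks x 4096 B = 80.00 MiB, and each p2l region's blk_sz:0x800 is 2048 blocks x 4096 B = 8.00 MiB, matching the figures printed earlier. A quick conversion sketch:

    # Hedged sketch: convert the superblock's blk_sz fields (in blocks)
    # to MiB, assuming a 4096-byte FTL block, and compare against the
    # MiB figures in the layout dump above.
    BLOCK_SIZE = 4096  # assumed FTL block size, in bytes

    regions = {"sb": 0x20, "l2p": 0x5000, "band_md": 0x80,
               "p2l": 0x800, "trim_md": 0x40}
    for name, blk_sz in regions.items():
        mib = blk_sz * BLOCK_SIZE / (1024 * 1024)
        print(f"{name}: {mib:.2f} MiB")
    # -> sb: 0.12, l2p: 80.00, band_md: 0.50, p2l: 8.00, trim_md: 0.25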
00:27:19.148  [2024-12-09 17:16:42.135622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.135653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:19.148  [2024-12-09 17:16:42.135662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.142 ms
00:27:19.148  [2024-12-09 17:16:42.135670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.148  [2024-12-09 17:16:42.135746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.135754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:27:19.148  [2024-12-09 17:16:42.135761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:27:19.148  [2024-12-09 17:16:42.135767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.148  [2024-12-09 17:16:42.183552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.183594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:19.148  [2024-12-09 17:16:42.183607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.739 ms
00:27:19.148  [2024-12-09 17:16:42.183614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.148  [2024-12-09 17:16:42.183660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.183668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:19.148  [2024-12-09 17:16:42.183675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:27:19.148  [2024-12-09 17:16:42.183681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.148  [2024-12-09 17:16:42.184125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.184140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:19.148  [2024-12-09 17:16:42.184149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.380 ms
00:27:19.148  [2024-12-09 17:16:42.184160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.148  [2024-12-09 17:16:42.184271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.148  [2024-12-09 17:16:42.184278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:19.148  [2024-12-09 17:16:42.184284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.093 ms
00:27:19.148  [2024-12-09 17:16:42.184290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.196030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.196051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:19.407  [2024-12-09 17:16:42.196059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.722 ms
00:27:19.407  [2024-12-09 17:16:42.196066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.206271] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:27:19.407  [2024-12-09 17:16:42.206296] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:19.407  [2024-12-09 17:16:42.206306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.206314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:27:19.407  [2024-12-09 17:16:42.206321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.155 ms
00:27:19.407  [2024-12-09 17:16:42.206327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.224992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.225077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:27:19.407  [2024-12-09 17:16:42.225088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.629 ms
00:27:19.407  [2024-12-09 17:16:42.225095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.233892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.233919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:27:19.407  [2024-12-09 17:16:42.233928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.753 ms
00:27:19.407  [2024-12-09 17:16:42.233934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.242356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.242381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:27:19.407  [2024-12-09 17:16:42.242388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.394 ms
00:27:19.407  [2024-12-09 17:16:42.242395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.242895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.242916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:27:19.407  [2024-12-09 17:16:42.242923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.440 ms
00:27:19.407  [2024-12-09 17:16:42.242930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.289841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.289892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:27:19.407  [2024-12-09 17:16:42.289905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 46.894 ms
00:27:19.407  [2024-12-09 17:16:42.289912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.298359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:27:19.407  [2024-12-09 17:16:42.300755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.300777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:27:19.407  [2024-12-09 17:16:42.300786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.804 ms
00:27:19.407  [2024-12-09 17:16:42.300797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.300892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.300901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:27:19.407  [2024-12-09 17:16:42.300909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:27:19.407  [2024-12-09 17:16:42.300915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.300979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.300987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:27:19.407  [2024-12-09 17:16:42.300994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:27:19.407  [2024-12-09 17:16:42.301001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.301020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.301027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:27:19.407  [2024-12-09 17:16:42.301033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:27:19.407  [2024-12-09 17:16:42.301039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.301069] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:19.407  [2024-12-09 17:16:42.301078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.301084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:27:19.407  [2024-12-09 17:16:42.301090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:27:19.407  [2024-12-09 17:16:42.301099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.319179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.319206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:27:19.407  [2024-12-09 17:16:42.319216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.066 ms
00:27:19.407  [2024-12-09 17:16:42.319224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.319284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.407  [2024-12-09 17:16:42.319292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:27:19.407  [2024-12-09 17:16:42.319299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:27:19.407  [2024-12-09 17:16:42.319305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.407  [2024-12-09 17:16:42.320299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.655 ms, result 0
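(Editor's worked example, not part of the captured log.) As a sanity check, the per-step durations traced above nearly account for the reported total: they sum to roughly 226.7 ms against the 230.655 ms the management process reports, the few remaining milliseconds being inter-step overhead. A minimal tally, with the durations copied in order from the trace lines above:

    # Hedged sketch: sum the per-step FTL startup durations (ms)
    # traced above and compare with the overall reported figure.
    steps_ms = [0.004, 0.022, 0.540, 10.020, 0.020, 6.060, 0.044,
                0.005, 2.935, 0.010, 0.202, 0.063, 0.467, 24.142,
                0.051, 47.739, 0.003, 0.380, 0.093, 11.722, 10.155,
                18.629, 8.753, 8.394, 0.440, 46.894, 10.804, 0.011,
                0.032, 0.005, 0.009, 18.066, 0.031]
    print(sum(steps_ms))   # ~226.7 ms, vs. 230.655 ms reported total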
00:27:20.340  
[2024-12-09T17:16:44.754Z] Copying: 47/1024 [MB] (47 MBps)
[2024-12-09T17:16:45.687Z] Copying: 87/1024 [MB] (40 MBps)
[2024-12-09T17:16:46.620Z] Copying: 119/1024 [MB] (31 MBps)
[2024-12-09T17:16:47.554Z] Copying: 142/1024 [MB] (23 MBps)
[2024-12-09T17:16:48.490Z] Copying: 177/1024 [MB] (34 MBps)
[2024-12-09T17:16:49.426Z] Copying: 203/1024 [MB] (26 MBps)
[2024-12-09T17:16:50.360Z] Copying: 232/1024 [MB] (29 MBps)
[2024-12-09T17:16:51.733Z] Copying: 263/1024 [MB] (30 MBps)
[2024-12-09T17:16:52.666Z] Copying: 282/1024 [MB] (18 MBps)
[2024-12-09T17:16:53.598Z] Copying: 306/1024 [MB] (24 MBps)
[2024-12-09T17:16:54.531Z] Copying: 335/1024 [MB] (29 MBps)
[2024-12-09T17:16:55.462Z] Copying: 372/1024 [MB] (36 MBps)
[2024-12-09T17:16:56.397Z] Copying: 403/1024 [MB] (31 MBps)
[2024-12-09T17:16:57.771Z] Copying: 432/1024 [MB] (28 MBps)
[2024-12-09T17:16:58.337Z] Copying: 461/1024 [MB] (29 MBps)
[2024-12-09T17:16:59.710Z] Copying: 492/1024 [MB] (30 MBps)
[2024-12-09T17:17:00.644Z] Copying: 525/1024 [MB] (33 MBps)
[2024-12-09T17:17:01.579Z] Copying: 555/1024 [MB] (29 MBps)
[2024-12-09T17:17:02.518Z] Copying: 583/1024 [MB] (28 MBps)
[2024-12-09T17:17:03.462Z] Copying: 606/1024 [MB] (23 MBps)
[2024-12-09T17:17:04.444Z] Copying: 617/1024 [MB] (10 MBps)
[2024-12-09T17:17:05.388Z] Copying: 629/1024 [MB] (12 MBps)
[2024-12-09T17:17:06.772Z] Copying: 643/1024 [MB] (14 MBps)
[2024-12-09T17:17:07.344Z] Copying: 661/1024 [MB] (17 MBps)
[2024-12-09T17:17:08.727Z] Copying: 681/1024 [MB] (19 MBps)
[2024-12-09T17:17:09.669Z] Copying: 692/1024 [MB] (11 MBps)
[2024-12-09T17:17:10.615Z] Copying: 708/1024 [MB] (15 MBps)
[2024-12-09T17:17:11.557Z] Copying: 720/1024 [MB] (12 MBps)
[2024-12-09T17:17:12.500Z] Copying: 735/1024 [MB] (15 MBps)
[2024-12-09T17:17:13.443Z] Copying: 753/1024 [MB] (17 MBps)
[2024-12-09T17:17:14.389Z] Copying: 771/1024 [MB] (17 MBps)
[2024-12-09T17:17:15.774Z] Copying: 783/1024 [MB] (12 MBps)
[2024-12-09T17:17:16.344Z] Copying: 795/1024 [MB] (11 MBps)
[2024-12-09T17:17:17.726Z] Copying: 808/1024 [MB] (12 MBps)
[2024-12-09T17:17:18.673Z] Copying: 823/1024 [MB] (15 MBps)
[2024-12-09T17:17:19.643Z] Copying: 833/1024 [MB] (10 MBps)
[2024-12-09T17:17:20.617Z] Copying: 850/1024 [MB] (17 MBps)
[2024-12-09T17:17:21.562Z] Copying: 862/1024 [MB] (12 MBps)
[2024-12-09T17:17:22.507Z] Copying: 892152/1048576 [kB] (8996 kBps)
[2024-12-09T17:17:23.452Z] Copying: 901488/1048576 [kB] (9336 kBps)
[2024-12-09T17:17:24.395Z] Copying: 892/1024 [MB] (11 MBps)
[2024-12-09T17:17:25.338Z] Copying: 903/1024 [MB] (11 MBps)
[2024-12-09T17:17:26.769Z] Copying: 914/1024 [MB] (10 MBps)
[2024-12-09T17:17:27.341Z] Copying: 925/1024 [MB] (11 MBps)
[2024-12-09T17:17:28.730Z] Copying: 938/1024 [MB] (12 MBps)
[2024-12-09T17:17:29.673Z] Copying: 955/1024 [MB] (17 MBps)
[2024-12-09T17:17:30.618Z] Copying: 967/1024 [MB] (11 MBps)
[2024-12-09T17:17:31.561Z] Copying: 977/1024 [MB] (10 MBps)
[2024-12-09T17:17:32.506Z] Copying: 988/1024 [MB] (10 MBps)
[2024-12-09T17:17:33.450Z] Copying: 1022140/1048576 [kB] (9800 kBps)
[2024-12-09T17:17:34.395Z] Copying: 1014/1024 [MB] (15 MBps)
[2024-12-09T17:17:34.656Z] Copying: 1048256/1048576 [kB] (9892 kBps)
[2024-12-09T17:17:34.917Z] Copying: 1024/1024 [MB] (average 19 MBps)
[2024-12-09 17:17:34.656712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.657017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:28:11.876  [2024-12-09 17:17:34.657153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:28:11.876  [2024-12-09 17:17:34.657181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:11.876  [2024-12-09 17:17:34.661250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:11.876  [2024-12-09 17:17:34.664961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.665079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:28:11.876  [2024-12-09 17:17:34.665136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.578 ms
00:28:11.876  [2024-12-09 17:17:34.665168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:11.876  [2024-12-09 17:17:34.676225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.676344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:28:11.876  [2024-12-09 17:17:34.676411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.929 ms
00:28:11.876  [2024-12-09 17:17:34.676434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:11.876  [2024-12-09 17:17:34.698493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.698532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:28:11.876  [2024-12-09 17:17:34.698544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.037 ms
00:28:11.876  [2024-12-09 17:17:34.698553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:11.876  [2024-12-09 17:17:34.704757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.704789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:28:11.876  [2024-12-09 17:17:34.704800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.169 ms
00:28:11.876  [2024-12-09 17:17:34.704808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:11.876  [2024-12-09 17:17:34.730726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.730766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:28:11.876  [2024-12-09 17:17:34.730777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.830 ms
00:28:11.876  [2024-12-09 17:17:34.730785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:11.876  [2024-12-09 17:17:34.746482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:11.876  [2024-12-09 17:17:34.746519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:28:11.876  [2024-12-09 17:17:34.746532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.660 ms
00:28:11.876  [2024-12-09 17:17:34.746541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.142  [2024-12-09 17:17:35.036494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.142  [2024-12-09 17:17:35.036553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:28:12.142  [2024-12-09 17:17:35.036574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 289.912 ms
00:28:12.142  [2024-12-09 17:17:35.036583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.142  [2024-12-09 17:17:35.062342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.142  [2024-12-09 17:17:35.062389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:28:12.142  [2024-12-09 17:17:35.062401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.742 ms
00:28:12.142  [2024-12-09 17:17:35.062424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.142  [2024-12-09 17:17:35.088429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.142  [2024-12-09 17:17:35.088474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:28:12.142  [2024-12-09 17:17:35.088486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.957 ms
00:28:12.142  [2024-12-09 17:17:35.088495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.142  [2024-12-09 17:17:35.113709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.142  [2024-12-09 17:17:35.113756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:28:12.142  [2024-12-09 17:17:35.113768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.169 ms
00:28:12.142  [2024-12-09 17:17:35.113775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.142  [2024-12-09 17:17:35.138861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.142  [2024-12-09 17:17:35.138909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:28:12.142  [2024-12-09 17:17:35.138921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.990 ms
00:28:12.142  [2024-12-09 17:17:35.138929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.142  [2024-12-09 17:17:35.138973] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:12.142  [2024-12-09 17:17:35.138991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:    94464 / 261120 	wr_cnt: 1	state: open
00:28:12.142  [2024-12-09 17:17:35.139003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.142  [2024-12-09 17:17:35.139774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.139997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:28:12.143  [2024-12-09 17:17:35.140168] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:28:12.143  [2024-12-09 17:17:35.140176] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         733c0fa8-9e02-4a21-8416-115b8afc7a4a
00:28:12.143  [2024-12-09 17:17:35.140198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    94464
00:28:12.143  [2024-12-09 17:17:35.140207] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        95424
00:28:12.143  [2024-12-09 17:17:35.140215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         94464
00:28:12.143  [2024-12-09 17:17:35.140225] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0102
00:28:12.143  [2024-12-09 17:17:35.140233] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:12.143  [2024-12-09 17:17:35.140242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:28:12.143  [2024-12-09 17:17:35.140251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:28:12.143  [2024-12-09 17:17:35.140259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:28:12.143  [2024-12-09 17:17:35.140266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:28:12.143  [2024-12-09 17:17:35.140274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.143  [2024-12-09 17:17:35.140283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:28:12.143  [2024-12-09 17:17:35.140293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.302 ms
00:28:12.143  [2024-12-09 17:17:35.140301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
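(Editor's worked example, not part of the captured log.) The WAF figure in the stats dump is simply the ratio of total to user writes reported a few lines earlier: 95424 / 94464 ≈ 1.0102, i.e. about 1% of all writes were FTL-internal (metadata and relocation) rather than user data. A one-line check:

    # Hedged sketch: recompute the write amplification factor (WAF)
    # from the totals in the stats dump above.
    total_writes, user_writes = 95424, 94464
    print(round(total_writes / user_writes, 4))   # -> 1.0102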
00:28:12.143  [2024-12-09 17:17:35.154889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.143  [2024-12-09 17:17:35.154928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:28:12.143  [2024-12-09 17:17:35.154941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.567 ms
00:28:12.143  [2024-12-09 17:17:35.154950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.143  [2024-12-09 17:17:35.155381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:12.143  [2024-12-09 17:17:35.155455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:28:12.143  [2024-12-09 17:17:35.155473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.393 ms
00:28:12.143  [2024-12-09 17:17:35.155481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.460  [2024-12-09 17:17:35.194663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.460  [2024-12-09 17:17:35.194717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:28:12.460  [2024-12-09 17:17:35.194730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.460  [2024-12-09 17:17:35.194740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.460  [2024-12-09 17:17:35.194814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.460  [2024-12-09 17:17:35.194823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:28:12.460  [2024-12-09 17:17:35.194838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.460  [2024-12-09 17:17:35.194866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.194936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.194949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:28:12.461  [2024-12-09 17:17:35.194958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.194967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.194984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.194993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:28:12.461  [2024-12-09 17:17:35.195002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.195010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.286516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.286601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:28:12.461  [2024-12-09 17:17:35.286618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.286629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.360276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.360356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:28:12.461  [2024-12-09 17:17:35.360385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.360403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.360535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.360549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:28:12.461  [2024-12-09 17:17:35.360559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.360569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.360615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.360625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:28:12.461  [2024-12-09 17:17:35.360635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.360644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.360760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.360772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:28:12.461  [2024-12-09 17:17:35.360782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.360791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.360826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.360836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:28:12.461  [2024-12-09 17:17:35.360874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.360884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.360945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.360959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:28:12.461  [2024-12-09 17:17:35.360968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.360977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.361038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:12.461  [2024-12-09 17:17:35.361051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:28:12.461  [2024-12-09 17:17:35.361061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:12.461  [2024-12-09 17:17:35.361070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:12.461  [2024-12-09 17:17:35.361240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 705.966 ms, result 0
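The trace_step records above all share a fixed four-line shape (an Action/Rollback header, then name, duration, status), so per-step timing can be tabulated directly from the log. Below is a minimal parsing sketch in Python, assuming only the line format visible here; the regexes and the function name are illustrative helpers, not part of SPDK:

    import re
    from collections import OrderedDict

    # Matches the "name:" and "duration:" lines of the four-line records
    # emitted by mngt/ftl_mngt.c:trace_step, as seen in this log.
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\]\s+name:\s+(.+)")
    DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\]\s+duration:\s+([\d.]+) ms")

    def step_durations(lines):
        """Accumulate the reported duration (ms) per step name."""
        steps, current = OrderedDict(), None
        for line in lines:
            if m := NAME_RE.search(line):
                current = m.group(1).strip()
            elif (m := DUR_RE.search(line)) and current is not None:
                steps[current] = steps.get(current, 0.0) + float(m.group(1))
                current = None
        return steps

    # Usage: print the slowest steps first.
    # for name, ms in sorted(step_durations(open("build.log")).items(),
    #                        key=lambda kv: -kv[1]):
    #     print(f"{ms:10.3f} ms  {name}")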
00:28:13.851  
00:28:13.851  
00:28:13.851   17:17:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:28:15.764   17:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:15.764  [2024-12-09 17:17:38.766160] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:28:15.764  [2024-12-09 17:17:38.766262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83448 ]
00:28:16.024  [2024-12-09 17:17:38.923660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:16.024  [2024-12-09 17:17:39.034916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:16.599  [2024-12-09 17:17:39.344423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:16.599  [2024-12-09 17:17:39.344521] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:16.599  [2024-12-09 17:17:39.510890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.510961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:28:16.599  [2024-12-09 17:17:39.510979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:28:16.599  [2024-12-09 17:17:39.510988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.511044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.511058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:28:16.599  [2024-12-09 17:17:39.511068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.036 ms
00:28:16.599  [2024-12-09 17:17:39.511077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.511100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:16.599  [2024-12-09 17:17:39.511840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:16.599  [2024-12-09 17:17:39.511886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.511894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:28:16.599  [2024-12-09 17:17:39.511905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.791 ms
00:28:16.599  [2024-12-09 17:17:39.511914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.514181] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:16.599  [2024-12-09 17:17:39.529348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.529400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:28:16.599  [2024-12-09 17:17:39.529415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.169 ms
00:28:16.599  [2024-12-09 17:17:39.529425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.529514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.529526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:28:16.599  [2024-12-09 17:17:39.529535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:28:16.599  [2024-12-09 17:17:39.529543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.541074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.541116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:28:16.599  [2024-12-09 17:17:39.541129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.451 ms
00:28:16.599  [2024-12-09 17:17:39.541145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.541235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.541245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:28:16.599  [2024-12-09 17:17:39.541256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.068 ms
00:28:16.599  [2024-12-09 17:17:39.541265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.541323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.541335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:28:16.599  [2024-12-09 17:17:39.541344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:28:16.599  [2024-12-09 17:17:39.541352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.541380] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:16.599  [2024-12-09 17:17:39.545995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.546037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:28:16.599  [2024-12-09 17:17:39.546052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.620 ms
00:28:16.599  [2024-12-09 17:17:39.546060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.546104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.546114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:28:16.599  [2024-12-09 17:17:39.546124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:28:16.599  [2024-12-09 17:17:39.546132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.546171] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:16.599  [2024-12-09 17:17:39.546199] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:16.599  [2024-12-09 17:17:39.546239] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:16.599  [2024-12-09 17:17:39.546260] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:16.599  [2024-12-09 17:17:39.546375] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:16.599  [2024-12-09 17:17:39.546387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:16.599  [2024-12-09 17:17:39.546400] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:16.599  [2024-12-09 17:17:39.546411] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546421] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546430] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:28:16.599  [2024-12-09 17:17:39.546439] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:28:16.599  [2024-12-09 17:17:39.546450] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:28:16.599  [2024-12-09 17:17:39.546459] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:28:16.599  [2024-12-09 17:17:39.546468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.546477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:28:16.599  [2024-12-09 17:17:39.546485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.300 ms
00:28:16.599  [2024-12-09 17:17:39.546492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.546577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.599  [2024-12-09 17:17:39.546585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:28:16.599  [2024-12-09 17:17:39.546593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:28:16.599  [2024-12-09 17:17:39.546600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.599  [2024-12-09 17:17:39.546712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:16.599  [2024-12-09 17:17:39.546724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:16.599  [2024-12-09 17:17:39.546733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:16.599  [2024-12-09 17:17:39.546756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:16.599  [2024-12-09 17:17:39.546779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:28:16.599  [2024-12-09 17:17:39.546794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:16.599  [2024-12-09 17:17:39.546802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:28:16.599  [2024-12-09 17:17:39.546809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:28:16.599  [2024-12-09 17:17:39.546824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:16.599  [2024-12-09 17:17:39.546836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:28:16.599  [2024-12-09 17:17:39.546844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:16.599  [2024-12-09 17:17:39.546893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:16.599  [2024-12-09 17:17:39.546919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:16.599  [2024-12-09 17:17:39.546944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:16.599  [2024-12-09 17:17:39.546965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:16.599  [2024-12-09 17:17:39.546980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:16.599  [2024-12-09 17:17:39.546987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:28:16.599  [2024-12-09 17:17:39.546995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:16.599  [2024-12-09 17:17:39.547002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:16.599  [2024-12-09 17:17:39.547009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:28:16.599  [2024-12-09 17:17:39.547015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:28:16.599  [2024-12-09 17:17:39.547022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:16.599  [2024-12-09 17:17:39.547029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:28:16.600  [2024-12-09 17:17:39.547037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:28:16.600  [2024-12-09 17:17:39.547044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:16.600  [2024-12-09 17:17:39.547051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:28:16.600  [2024-12-09 17:17:39.547057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:16.600  [2024-12-09 17:17:39.547064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:16.600  [2024-12-09 17:17:39.547070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:28:16.600  [2024-12-09 17:17:39.547076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:16.600  [2024-12-09 17:17:39.547084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:16.600  [2024-12-09 17:17:39.547093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:16.600  [2024-12-09 17:17:39.547101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:28:16.600  [2024-12-09 17:17:39.547111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:16.600  [2024-12-09 17:17:39.547120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:16.600  [2024-12-09 17:17:39.547128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:28:16.600  [2024-12-09 17:17:39.547136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:28:16.600  [2024-12-09 17:17:39.547144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:16.600  [2024-12-09 17:17:39.547151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:28:16.600  [2024-12-09 17:17:39.547159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:28:16.600  [2024-12-09 17:17:39.547168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:16.600  [2024-12-09 17:17:39.547179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:28:16.600  [2024-12-09 17:17:39.547200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:28:16.600  [2024-12-09 17:17:39.547207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:28:16.600  [2024-12-09 17:17:39.547215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:28:16.600  [2024-12-09 17:17:39.547223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:28:16.600  [2024-12-09 17:17:39.547230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:28:16.600  [2024-12-09 17:17:39.547237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:28:16.600  [2024-12-09 17:17:39.547245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:28:16.600  [2024-12-09 17:17:39.547252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:28:16.600  [2024-12-09 17:17:39.547259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:16.600  [2024-12-09 17:17:39.547303] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:16.600  [2024-12-09 17:17:39.547312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:16.600  [2024-12-09 17:17:39.547330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:16.600  [2024-12-09 17:17:39.547338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:16.600  [2024-12-09 17:17:39.547346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
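The SB metadata tables above give offsets and sizes in blocks (hex). Assuming FTL's 4 KiB block size (an assumption; the block size is not stated anywhere in this log), the blk_sz fields convert back to the MiB figures printed in the layout dump, e.g. the base-dev data region type 0x9 spans 0x1900000 blocks = 102400 MiB, matching "Region data_btm ... blocks: 102400.00 MiB" above. A short check:

    BLOCK_SIZE = 4096  # assumed FTL block size in bytes (not stated in this log)

    def blocks_to_mib(blk_sz_hex: str) -> float:
        """Convert a blk_sz field from the SB metadata dump to MiB."""
        return int(blk_sz_hex, 16) * BLOCK_SIZE / (1024 * 1024)

    print(blocks_to_mib("0x1900000"))  # 102400.0 -> the data_btm region above
    print(blocks_to_mib("0x5000"))     # 80.0     -> the l2p region's 80.00 MiB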
00:28:16.600  [2024-12-09 17:17:39.547354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.600  [2024-12-09 17:17:39.547363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:28:16.600  [2024-12-09 17:17:39.547372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.715 ms
00:28:16.600  [2024-12-09 17:17:39.547382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.600  [2024-12-09 17:17:39.585670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.600  [2024-12-09 17:17:39.585718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:28:16.600  [2024-12-09 17:17:39.585732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.238 ms
00:28:16.600  [2024-12-09 17:17:39.585747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.600  [2024-12-09 17:17:39.585865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.600  [2024-12-09 17:17:39.585877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:28:16.600  [2024-12-09 17:17:39.585887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.088 ms
00:28:16.600  [2024-12-09 17:17:39.585897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.638035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.638090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:28:16.863  [2024-12-09 17:17:39.638105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 52.068 ms
00:28:16.863  [2024-12-09 17:17:39.638116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.638176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.638188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:28:16.863  [2024-12-09 17:17:39.638202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:28:16.863  [2024-12-09 17:17:39.638211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.638998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.639037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:28:16.863  [2024-12-09 17:17:39.639049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.696 ms
00:28:16.863  [2024-12-09 17:17:39.639060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.639242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.639260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:28:16.863  [2024-12-09 17:17:39.639277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.148 ms
00:28:16.863  [2024-12-09 17:17:39.639286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.657212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.657262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:28:16.863  [2024-12-09 17:17:39.657274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.902 ms
00:28:16.863  [2024-12-09 17:17:39.657283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.672796] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:28:16.863  [2024-12-09 17:17:39.672853] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:16.863  [2024-12-09 17:17:39.672868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.672878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:28:16.863  [2024-12-09 17:17:39.672888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.459 ms
00:28:16.863  [2024-12-09 17:17:39.672897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.699206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.699253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:28:16.863  [2024-12-09 17:17:39.699266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.244 ms
00:28:16.863  [2024-12-09 17:17:39.699276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.712447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.712496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:28:16.863  [2024-12-09 17:17:39.712509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.112 ms
00:28:16.863  [2024-12-09 17:17:39.712517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.725327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.725369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:28:16.863  [2024-12-09 17:17:39.725382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.760 ms
00:28:16.863  [2024-12-09 17:17:39.725390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.726080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.726113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:28:16.863  [2024-12-09 17:17:39.726128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.576 ms
00:28:16.863  [2024-12-09 17:17:39.726138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.799419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.799482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:28:16.863  [2024-12-09 17:17:39.799508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 73.259 ms
00:28:16.863  [2024-12-09 17:17:39.799519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.863  [2024-12-09 17:17:39.812301] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:16.863  [2024-12-09 17:17:39.816220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.863  [2024-12-09 17:17:39.816263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:28:16.863  [2024-12-09 17:17:39.816277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.640 ms
00:28:16.864  [2024-12-09 17:17:39.816286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.816404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.864  [2024-12-09 17:17:39.816418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:28:16.864  [2024-12-09 17:17:39.816433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:28:16.864  [2024-12-09 17:17:39.816441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.818554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.864  [2024-12-09 17:17:39.818605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:28:16.864  [2024-12-09 17:17:39.818617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.068 ms
00:28:16.864  [2024-12-09 17:17:39.818627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.818668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.864  [2024-12-09 17:17:39.818679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:28:16.864  [2024-12-09 17:17:39.818689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:28:16.864  [2024-12-09 17:17:39.818698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.818749] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:16.864  [2024-12-09 17:17:39.818762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.864  [2024-12-09 17:17:39.818771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:28:16.864  [2024-12-09 17:17:39.818781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:28:16.864  [2024-12-09 17:17:39.818792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.845459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.864  [2024-12-09 17:17:39.845513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:28:16.864  [2024-12-09 17:17:39.845534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.645 ms
00:28:16.864  [2024-12-09 17:17:39.845544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.845636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.864  [2024-12-09 17:17:39.845647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:28:16.864  [2024-12-09 17:17:39.845657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:28:16.864  [2024-12-09 17:17:39.845667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:16.864  [2024-12-09 17:17:39.847250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 335.752 ms, result 0
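As a sanity check on the summary line above: adding up the individual "duration:" fields of the startup steps gives roughly 329 ms, a little under the reported 335.752 ms total; the gap is presumably time spent between steps in the management pipeline rather than an accounting error. With the step_durations sketch from earlier:

    # total = sum(step_durations(startup_lines).values())  # ~329 ms vs. 335.752 ms reported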
00:28:18.250  
[2024-12-09T17:17:42.234Z] Copying: 1472/1048576 [kB] (1472 kBps)
[2024-12-09T17:17:43.179Z] Copying: 4824/1048576 [kB] (3352 kBps)
[2024-12-09T17:17:44.126Z] Copying: 14/1024 [MB] (10 MBps)
[2024-12-09T17:17:45.071Z] Copying: 33/1024 [MB] (18 MBps)
[2024-12-09T17:17:46.455Z] Copying: 57/1024 [MB] (23 MBps)
[2024-12-09T17:17:47.397Z] Copying: 77/1024 [MB] (20 MBps)
[2024-12-09T17:17:48.339Z] Copying: 99/1024 [MB] (21 MBps)
[2024-12-09T17:17:49.284Z] Copying: 114/1024 [MB] (15 MBps)
[2024-12-09T17:17:50.228Z] Copying: 140/1024 [MB] (25 MBps)
[2024-12-09T17:17:51.172Z] Copying: 160/1024 [MB] (19 MBps)
[2024-12-09T17:17:52.129Z] Copying: 177/1024 [MB] (17 MBps)
[2024-12-09T17:17:53.076Z] Copying: 194/1024 [MB] (16 MBps)
[2024-12-09T17:17:54.465Z] Copying: 210/1024 [MB] (15 MBps)
[2024-12-09T17:17:55.038Z] Copying: 226/1024 [MB] (16 MBps)
[2024-12-09T17:17:56.424Z] Copying: 244/1024 [MB] (17 MBps)
[2024-12-09T17:17:57.368Z] Copying: 260/1024 [MB] (16 MBps)
[2024-12-09T17:17:58.312Z] Copying: 285/1024 [MB] (24 MBps)
[2024-12-09T17:17:59.257Z] Copying: 314/1024 [MB] (28 MBps)
[2024-12-09T17:18:00.201Z] Copying: 334/1024 [MB] (20 MBps)
[2024-12-09T17:18:01.147Z] Copying: 350/1024 [MB] (16 MBps)
[2024-12-09T17:18:02.092Z] Copying: 367/1024 [MB] (16 MBps)
[2024-12-09T17:18:03.035Z] Copying: 385/1024 [MB] (18 MBps)
[2024-12-09T17:18:04.424Z] Copying: 405/1024 [MB] (19 MBps)
[2024-12-09T17:18:05.368Z] Copying: 424/1024 [MB] (18 MBps)
[2024-12-09T17:18:06.313Z] Copying: 444/1024 [MB] (20 MBps)
[2024-12-09T17:18:07.283Z] Copying: 458/1024 [MB] (14 MBps)
[2024-12-09T17:18:08.262Z] Copying: 475/1024 [MB] (16 MBps)
[2024-12-09T17:18:09.205Z] Copying: 495/1024 [MB] (19 MBps)
[2024-12-09T17:18:10.153Z] Copying: 522/1024 [MB] (26 MBps)
[2024-12-09T17:18:11.102Z] Copying: 537/1024 [MB] (14 MBps)
[2024-12-09T17:18:12.049Z] Copying: 555/1024 [MB] (18 MBps)
[2024-12-09T17:18:13.440Z] Copying: 579/1024 [MB] (24 MBps)
[2024-12-09T17:18:14.383Z] Copying: 600/1024 [MB] (20 MBps)
[2024-12-09T17:18:15.327Z] Copying: 626/1024 [MB] (26 MBps)
[2024-12-09T17:18:16.273Z] Copying: 642/1024 [MB] (15 MBps)
[2024-12-09T17:18:17.218Z] Copying: 657/1024 [MB] (14 MBps)
[2024-12-09T17:18:18.164Z] Copying: 676/1024 [MB] (19 MBps)
[2024-12-09T17:18:19.109Z] Copying: 692/1024 [MB] (16 MBps)
[2024-12-09T17:18:20.054Z] Copying: 707/1024 [MB] (14 MBps)
[2024-12-09T17:18:21.443Z] Copying: 729/1024 [MB] (21 MBps)
[2024-12-09T17:18:22.387Z] Copying: 743/1024 [MB] (14 MBps)
[2024-12-09T17:18:23.357Z] Copying: 765/1024 [MB] (22 MBps)
[2024-12-09T17:18:24.320Z] Copying: 781/1024 [MB] (15 MBps)
[2024-12-09T17:18:25.266Z] Copying: 797/1024 [MB] (16 MBps)
[2024-12-09T17:18:26.211Z] Copying: 814/1024 [MB] (16 MBps)
[2024-12-09T17:18:27.156Z] Copying: 830/1024 [MB] (16 MBps)
[2024-12-09T17:18:28.099Z] Copying: 846/1024 [MB] (15 MBps)
[2024-12-09T17:18:29.042Z] Copying: 867/1024 [MB] (21 MBps)
[2024-12-09T17:18:30.431Z] Copying: 884/1024 [MB] (16 MBps)
[2024-12-09T17:18:31.375Z] Copying: 902/1024 [MB] (17 MBps)
[2024-12-09T17:18:32.317Z] Copying: 920/1024 [MB] (18 MBps)
[2024-12-09T17:18:33.261Z] Copying: 938/1024 [MB] (17 MBps)
[2024-12-09T17:18:34.204Z] Copying: 959/1024 [MB] (21 MBps)
[2024-12-09T17:18:35.148Z] Copying: 981/1024 [MB] (21 MBps)
[2024-12-09T17:18:36.086Z] Copying: 1001/1024 [MB] (20 MBps)
[2024-12-09T17:18:36.660Z] Copying: 1024/1024 [MB] (average 18 MBps)
00:29:13.619  [2024-12-09 17:18:36.362211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.362328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:29:13.619  [2024-12-09 17:18:36.362349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:29:13.619  [2024-12-09 17:18:36.362360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.362388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:13.619  [2024-12-09 17:18:36.366149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.366194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:29:13.619  [2024-12-09 17:18:36.366208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.741 ms
00:29:13.619  [2024-12-09 17:18:36.366217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.366476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.366495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:29:13.619  [2024-12-09 17:18:36.366506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.228 ms
00:29:13.619  [2024-12-09 17:18:36.366515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.382121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.382247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:29:13.619  [2024-12-09 17:18:36.382262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.586 ms
00:29:13.619  [2024-12-09 17:18:36.382271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.389343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.389390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:29:13.619  [2024-12-09 17:18:36.389411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.029 ms
00:29:13.619  [2024-12-09 17:18:36.389419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.417932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.418005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:29:13.619  [2024-12-09 17:18:36.418027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.368 ms
00:29:13.619  [2024-12-09 17:18:36.418040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.435485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.435539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:29:13.619  [2024-12-09 17:18:36.435554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.370 ms
00:29:13.619  [2024-12-09 17:18:36.435564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.441327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.441381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:29:13.619  [2024-12-09 17:18:36.441394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.703 ms
00:29:13.619  [2024-12-09 17:18:36.441412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.467717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.467771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:29:13.619  [2024-12-09 17:18:36.467785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.287 ms
00:29:13.619  [2024-12-09 17:18:36.467793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.493905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.493957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:29:13.619  [2024-12-09 17:18:36.493969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.063 ms
00:29:13.619  [2024-12-09 17:18:36.493977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.518866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.518917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:29:13.619  [2024-12-09 17:18:36.518929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.839 ms
00:29:13.619  [2024-12-09 17:18:36.518937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.543582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.619  [2024-12-09 17:18:36.543632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:29:13.619  [2024-12-09 17:18:36.543644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.568 ms
00:29:13.619  [2024-12-09 17:18:36.543652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.619  [2024-12-09 17:18:36.543697] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:13.619  [2024-12-09 17:18:36.543714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:29:13.619  [2024-12-09 17:18:36.543727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:     1792 / 261120 	wr_cnt: 1	state: open
00:29:13.619  [2024-12-09 17:18:36.543736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.543991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.619  [2024-12-09 17:18:36.544000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:29:13.620  [2024-12-09 17:18:36.544568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:29:13.620  [2024-12-09 17:18:36.544577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         733c0fa8-9e02-4a21-8416-115b8afc7a4a
00:29:13.620  [2024-12-09 17:18:36.544586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    262912
00:29:13.620  [2024-12-09 17:18:36.544594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        170432
00:29:13.620  [2024-12-09 17:18:36.544608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         168448
00:29:13.620  [2024-12-09 17:18:36.544617] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0118
00:29:13.620  [2024-12-09 17:18:36.544625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:13.620  [2024-12-09 17:18:36.544643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:29:13.620  [2024-12-09 17:18:36.544652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:29:13.620  [2024-12-09 17:18:36.544668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:29:13.620  [2024-12-09 17:18:36.544675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:29:13.620  [2024-12-09 17:18:36.544684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.620  [2024-12-09 17:18:36.544693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:29:13.620  [2024-12-09 17:18:36.544703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.989 ms
00:29:13.620  [2024-12-09 17:18:36.544712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.620  [2024-12-09 17:18:36.559359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.620  [2024-12-09 17:18:36.559405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:29:13.620  [2024-12-09 17:18:36.559416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.628 ms
00:29:13.620  [2024-12-09 17:18:36.559425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.620  [2024-12-09 17:18:36.559878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:13.620  [2024-12-09 17:18:36.559890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:29:13.620  [2024-12-09 17:18:36.559901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.414 ms
00:29:13.620  [2024-12-09 17:18:36.559909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.620  [2024-12-09 17:18:36.599320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.620  [2024-12-09 17:18:36.599369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:29:13.620  [2024-12-09 17:18:36.599381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.620  [2024-12-09 17:18:36.599391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.620  [2024-12-09 17:18:36.599458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.620  [2024-12-09 17:18:36.599467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:29:13.620  [2024-12-09 17:18:36.599478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.620  [2024-12-09 17:18:36.599487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.620  [2024-12-09 17:18:36.599579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.621  [2024-12-09 17:18:36.599591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:29:13.621  [2024-12-09 17:18:36.599601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.621  [2024-12-09 17:18:36.599609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.621  [2024-12-09 17:18:36.599627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.621  [2024-12-09 17:18:36.599636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:29:13.621  [2024-12-09 17:18:36.599645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.621  [2024-12-09 17:18:36.599654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.691628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.691694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:29:13.882  [2024-12-09 17:18:36.691710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.691719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:29:13.882  [2024-12-09 17:18:36.766286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:29:13.882  [2024-12-09 17:18:36.766400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:29:13.882  [2024-12-09 17:18:36.766506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:29:13.882  [2024-12-09 17:18:36.766647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:29:13.882  [2024-12-09 17:18:36.766711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:29:13.882  [2024-12-09 17:18:36.766792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.766888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:13.882  [2024-12-09 17:18:36.766900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:29:13.882  [2024-12-09 17:18:36.766909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:13.882  [2024-12-09 17:18:36.766918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:13.882  [2024-12-09 17:18:36.767095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.839 ms, result 0
00:29:14.826  
00:29:14.826  
00:29:14.826   17:18:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:16.744  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:29:16.744   17:18:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:16.744  [2024-12-09 17:18:39.748992] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:29:16.744  [2024-12-09 17:18:39.749099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ]
00:29:17.006  [2024-12-09 17:18:39.905790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.267  [2024-12-09 17:18:40.059637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:17.530  [2024-12-09 17:18:40.398108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:17.530  [2024-12-09 17:18:40.398219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:17.530  [2024-12-09 17:18:40.564812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.530  [2024-12-09 17:18:40.564905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:29:17.530  [2024-12-09 17:18:40.564923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:29:17.530  [2024-12-09 17:18:40.564934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.530  [2024-12-09 17:18:40.564996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.530  [2024-12-09 17:18:40.565010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:29:17.530  [2024-12-09 17:18:40.565021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.041 ms
00:29:17.530  [2024-12-09 17:18:40.565030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.530  [2024-12-09 17:18:40.565055] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:17.530  [2024-12-09 17:18:40.565781] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:17.530  [2024-12-09 17:18:40.565800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.530  [2024-12-09 17:18:40.565810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:29:17.530  [2024-12-09 17:18:40.565819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.752 ms
00:29:17.530  [2024-12-09 17:18:40.565830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.530  [2024-12-09 17:18:40.568121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:17.792  [2024-12-09 17:18:40.583832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.792  [2024-12-09 17:18:40.583892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:29:17.792  [2024-12-09 17:18:40.583907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.713 ms
00:29:17.792  [2024-12-09 17:18:40.583916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.792  [2024-12-09 17:18:40.584001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.584013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:29:17.793  [2024-12-09 17:18:40.584023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:29:17.793  [2024-12-09 17:18:40.584032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.595473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.595522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:29:17.793  [2024-12-09 17:18:40.595535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.359 ms
00:29:17.793  [2024-12-09 17:18:40.595550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.595644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.595654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:29:17.793  [2024-12-09 17:18:40.595665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:29:17.793  [2024-12-09 17:18:40.595676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.595734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.595747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:29:17.793  [2024-12-09 17:18:40.595757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:29:17.793  [2024-12-09 17:18:40.595766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.595794] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:17.793  [2024-12-09 17:18:40.600561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.600606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:29:17.793  [2024-12-09 17:18:40.600622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.773 ms
00:29:17.793  [2024-12-09 17:18:40.600632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.600675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.600685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:29:17.793  [2024-12-09 17:18:40.600695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:29:17.793  [2024-12-09 17:18:40.600704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.600743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:29:17.793  [2024-12-09 17:18:40.600774] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:29:17.793  [2024-12-09 17:18:40.600815] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:29:17.793  [2024-12-09 17:18:40.600839] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:29:17.793  [2024-12-09 17:18:40.600972] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:17.793  [2024-12-09 17:18:40.600984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:17.793  [2024-12-09 17:18:40.600995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:17.793  [2024-12-09 17:18:40.601008] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601018] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601027] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:29:17.793  [2024-12-09 17:18:40.601036] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:29:17.793  [2024-12-09 17:18:40.601048] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:29:17.793  [2024-12-09 17:18:40.601056] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:29:17.793  [2024-12-09 17:18:40.601065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.601075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:29:17.793  [2024-12-09 17:18:40.601085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.327 ms
00:29:17.793  [2024-12-09 17:18:40.601093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.601184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.793  [2024-12-09 17:18:40.601194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:29:17.793  [2024-12-09 17:18:40.601203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.077 ms
00:29:17.793  [2024-12-09 17:18:40.601213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.793  [2024-12-09 17:18:40.601329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:17.793  [2024-12-09 17:18:40.601348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:17.793  [2024-12-09 17:18:40.601358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:17.793  [2024-12-09 17:18:40.601385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:17.793  [2024-12-09 17:18:40.601408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:29:17.793  [2024-12-09 17:18:40.601423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:17.793  [2024-12-09 17:18:40.601436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:29:17.793  [2024-12-09 17:18:40.601445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:29:17.793  [2024-12-09 17:18:40.601462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:17.793  [2024-12-09 17:18:40.601470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:29:17.793  [2024-12-09 17:18:40.601477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:17.793  [2024-12-09 17:18:40.601492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:17.793  [2024-12-09 17:18:40.601515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:17.793  [2024-12-09 17:18:40.601536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:17.793  [2024-12-09 17:18:40.601558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:17.793  [2024-12-09 17:18:40.601580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:17.793  [2024-12-09 17:18:40.601601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:29:17.793  [2024-12-09 17:18:40.601614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:17.793  [2024-12-09 17:18:40.601620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:29:17.793  [2024-12-09 17:18:40.601627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:29:17.793  [2024-12-09 17:18:40.601637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:17.793  [2024-12-09 17:18:40.601644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:29:17.793  [2024-12-09 17:18:40.601650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:17.793  [2024-12-09 17:18:40.601663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:29:17.793  [2024-12-09 17:18:40.601669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601677] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:17.793  [2024-12-09 17:18:40.601686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:17.793  [2024-12-09 17:18:40.601697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:17.793  [2024-12-09 17:18:40.601713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:17.793  [2024-12-09 17:18:40.601721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:29:17.793  [2024-12-09 17:18:40.601728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:29:17.793  [2024-12-09 17:18:40.601735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:17.793  [2024-12-09 17:18:40.601742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:29:17.793  [2024-12-09 17:18:40.601751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:29:17.793  [2024-12-09 17:18:40.601762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:17.793  [2024-12-09 17:18:40.601772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:17.793  [2024-12-09 17:18:40.601786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:17.793  [2024-12-09 17:18:40.601793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:17.793  [2024-12-09 17:18:40.601801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:17.793  [2024-12-09 17:18:40.601808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:17.793  [2024-12-09 17:18:40.601815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:17.794  [2024-12-09 17:18:40.601823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:17.794  [2024-12-09 17:18:40.601832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:17.794  [2024-12-09 17:18:40.601839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:17.794  [2024-12-09 17:18:40.601868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:17.794  [2024-12-09 17:18:40.601877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:17.794  [2024-12-09 17:18:40.601893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:17.794  [2024-12-09 17:18:40.601901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:17.794  [2024-12-09 17:18:40.601910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:17.794  [2024-12-09 17:18:40.601917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:17.794  [2024-12-09 17:18:40.601925] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:17.794  [2024-12-09 17:18:40.601935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:17.794  [2024-12-09 17:18:40.601944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:17.794  [2024-12-09 17:18:40.601952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:17.794  [2024-12-09 17:18:40.601959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:17.794  [2024-12-09 17:18:40.601967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:17.794  [2024-12-09 17:18:40.601978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.601987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:29:17.794  [2024-12-09 17:18:40.601997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.719 ms
00:29:17.794  [2024-12-09 17:18:40.602005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.640290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.640347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:29:17.794  [2024-12-09 17:18:40.640370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.236 ms
00:29:17.794  [2024-12-09 17:18:40.640384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.640482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.640492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:29:17.794  [2024-12-09 17:18:40.640503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.070 ms
00:29:17.794  [2024-12-09 17:18:40.640511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.689715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.689777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:29:17.794  [2024-12-09 17:18:40.689792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 49.136 ms
00:29:17.794  [2024-12-09 17:18:40.689801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.689867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.689879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:29:17.794  [2024-12-09 17:18:40.689894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:29:17.794  [2024-12-09 17:18:40.689902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.690674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.690723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:29:17.794  [2024-12-09 17:18:40.690741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.679 ms
00:29:17.794  [2024-12-09 17:18:40.690755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.691018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.691040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:29:17.794  [2024-12-09 17:18:40.691065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.216 ms
00:29:17.794  [2024-12-09 17:18:40.691080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.709765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.709811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:29:17.794  [2024-12-09 17:18:40.709823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.645 ms
00:29:17.794  [2024-12-09 17:18:40.709832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.724958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:29:17.794  [2024-12-09 17:18:40.725008] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:29:17.794  [2024-12-09 17:18:40.725024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.725034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:29:17.794  [2024-12-09 17:18:40.725044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.055 ms
00:29:17.794  [2024-12-09 17:18:40.725052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.751305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.751355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:29:17.794  [2024-12-09 17:18:40.751369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.196 ms
00:29:17.794  [2024-12-09 17:18:40.751380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.764486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.764529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:29:17.794  [2024-12-09 17:18:40.764541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.035 ms
00:29:17.794  [2024-12-09 17:18:40.764550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.777240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.777286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:29:17.794  [2024-12-09 17:18:40.777298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.642 ms
00:29:17.794  [2024-12-09 17:18:40.777307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:17.794  [2024-12-09 17:18:40.778021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.794  [2024-12-09 17:18:40.778057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:29:17.794  [2024-12-09 17:18:40.778074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.601 ms
00:29:17.794  [2024-12-09 17:18:40.778084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.851727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.851791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:29:18.055  [2024-12-09 17:18:40.851814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 73.621 ms
00:29:18.055  [2024-12-09 17:18:40.851824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.864667] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:29:18.055  [2024-12-09 17:18:40.868651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.868696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:29:18.055  [2024-12-09 17:18:40.868711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.757 ms
00:29:18.055  [2024-12-09 17:18:40.868722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.868812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.868827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:29:18.055  [2024-12-09 17:18:40.868841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:29:18.055  [2024-12-09 17:18:40.868868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.869988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.870044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:29:18.055  [2024-12-09 17:18:40.870057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.077 ms
00:29:18.055  [2024-12-09 17:18:40.870069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.870106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.870119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:29:18.055  [2024-12-09 17:18:40.870129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:29:18.055  [2024-12-09 17:18:40.870138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.870189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:18.055  [2024-12-09 17:18:40.870204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.870213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:29:18.055  [2024-12-09 17:18:40.870223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.017 ms
00:29:18.055  [2024-12-09 17:18:40.870234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.896607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.896662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:29:18.055  [2024-12-09 17:18:40.896683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.352 ms
00:29:18.055  [2024-12-09 17:18:40.896692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.896790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:18.055  [2024-12-09 17:18:40.896805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:29:18.055  [2024-12-09 17:18:40.896814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.049 ms
00:29:18.055  [2024-12-09 17:18:40.896824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:18.055  [2024-12-09 17:18:40.898349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.933 ms, result 0
00:29:19.445  
[2024-12-09T17:18:43.432Z] Copying: 12/1024 [MB] (12 MBps)
[2024-12-09T17:18:44.374Z] Copying: 22/1024 [MB] (10 MBps)
[2024-12-09T17:18:45.320Z] Copying: 38/1024 [MB] (15 MBps)
[2024-12-09T17:18:46.264Z] Copying: 59/1024 [MB] (21 MBps)
[2024-12-09T17:18:47.209Z] Copying: 71/1024 [MB] (11 MBps)
[2024-12-09T17:18:48.155Z] Copying: 86/1024 [MB] (14 MBps)
[2024-12-09T17:18:49.100Z] Copying: 99/1024 [MB] (13 MBps)
[2024-12-09T17:18:50.487Z] Copying: 110/1024 [MB] (11 MBps)
[2024-12-09T17:18:51.429Z] Copying: 125/1024 [MB] (14 MBps)
[2024-12-09T17:18:52.371Z] Copying: 145/1024 [MB] (20 MBps)
[2024-12-09T17:18:53.377Z] Copying: 159/1024 [MB] (13 MBps)
[2024-12-09T17:18:54.322Z] Copying: 179/1024 [MB] (19 MBps)
[2024-12-09T17:18:55.263Z] Copying: 196/1024 [MB] (17 MBps)
[2024-12-09T17:18:56.206Z] Copying: 225/1024 [MB] (28 MBps)
[2024-12-09T17:18:57.150Z] Copying: 247/1024 [MB] (21 MBps)
[2024-12-09T17:18:58.092Z] Copying: 263/1024 [MB] (16 MBps)
[2024-12-09T17:18:59.484Z] Copying: 280/1024 [MB] (16 MBps)
[2024-12-09T17:19:00.425Z] Copying: 298/1024 [MB] (18 MBps)
[2024-12-09T17:19:01.366Z] Copying: 312/1024 [MB] (14 MBps)
[2024-12-09T17:19:02.309Z] Copying: 336/1024 [MB] (23 MBps)
[2024-12-09T17:19:03.251Z] Copying: 352/1024 [MB] (16 MBps)
[2024-12-09T17:19:04.190Z] Copying: 368/1024 [MB] (16 MBps)
[2024-12-09T17:19:05.134Z] Copying: 400/1024 [MB] (31 MBps)
[2024-12-09T17:19:06.521Z] Copying: 418/1024 [MB] (18 MBps)
[2024-12-09T17:19:07.095Z] Copying: 435/1024 [MB] (16 MBps)
[2024-12-09T17:19:08.105Z] Copying: 451/1024 [MB] (15 MBps)
[2024-12-09T17:19:09.495Z] Copying: 464/1024 [MB] (13 MBps)
[2024-12-09T17:19:10.441Z] Copying: 476/1024 [MB] (11 MBps)
[2024-12-09T17:19:11.388Z] Copying: 487/1024 [MB] (11 MBps)
[2024-12-09T17:19:12.333Z] Copying: 499/1024 [MB] (11 MBps)
[2024-12-09T17:19:13.280Z] Copying: 512/1024 [MB] (12 MBps)
[2024-12-09T17:19:14.225Z] Copying: 525/1024 [MB] (13 MBps)
[2024-12-09T17:19:15.170Z] Copying: 538/1024 [MB] (13 MBps)
[2024-12-09T17:19:16.115Z] Copying: 554/1024 [MB] (15 MBps)
[2024-12-09T17:19:17.504Z] Copying: 564/1024 [MB] (10 MBps)
[2024-12-09T17:19:18.450Z] Copying: 583/1024 [MB] (18 MBps)
[2024-12-09T17:19:19.395Z] Copying: 607104/1048576 [kB] (10040 kBps)
[2024-12-09T17:19:20.340Z] Copying: 604/1024 [MB] (11 MBps)
[2024-12-09T17:19:21.283Z] Copying: 615/1024 [MB] (10 MBps)
[2024-12-09T17:19:22.282Z] Copying: 626/1024 [MB] (11 MBps)
[2024-12-09T17:19:23.226Z] Copying: 638/1024 [MB] (12 MBps)
[2024-12-09T17:19:24.170Z] Copying: 663292/1048576 [kB] (9516 kBps)
[2024-12-09T17:19:25.116Z] Copying: 672720/1048576 [kB] (9428 kBps)
[2024-12-09T17:19:26.503Z] Copying: 668/1024 [MB] (11 MBps)
[2024-12-09T17:19:27.448Z] Copying: 694344/1048576 [kB] (9908 kBps)
[2024-12-09T17:19:28.389Z] Copying: 689/1024 [MB] (11 MBps)
[2024-12-09T17:19:29.335Z] Copying: 705/1024 [MB] (16 MBps)
[2024-12-09T17:19:30.277Z] Copying: 719/1024 [MB] (13 MBps)
[2024-12-09T17:19:31.221Z] Copying: 738/1024 [MB] (18 MBps)
[2024-12-09T17:19:32.164Z] Copying: 749/1024 [MB] (11 MBps)
[2024-12-09T17:19:33.109Z] Copying: 769/1024 [MB] (19 MBps)
[2024-12-09T17:19:34.496Z] Copying: 791/1024 [MB] (22 MBps)
[2024-12-09T17:19:35.438Z] Copying: 807/1024 [MB] (15 MBps)
[2024-12-09T17:19:36.381Z] Copying: 831/1024 [MB] (24 MBps)
[2024-12-09T17:19:37.366Z] Copying: 843/1024 [MB] (11 MBps)
[2024-12-09T17:19:38.308Z] Copying: 860/1024 [MB] (16 MBps)
[2024-12-09T17:19:39.253Z] Copying: 871/1024 [MB] (11 MBps)
[2024-12-09T17:19:40.199Z] Copying: 886/1024 [MB] (14 MBps)
[2024-12-09T17:19:41.143Z] Copying: 900/1024 [MB] (14 MBps)
[2024-12-09T17:19:42.087Z] Copying: 912/1024 [MB] (11 MBps)
[2024-12-09T17:19:43.476Z] Copying: 924/1024 [MB] (11 MBps)
[2024-12-09T17:19:44.420Z] Copying: 935/1024 [MB] (11 MBps)
[2024-12-09T17:19:45.363Z] Copying: 949/1024 [MB] (14 MBps)
[2024-12-09T17:19:46.307Z] Copying: 960/1024 [MB] (10 MBps)
[2024-12-09T17:19:47.251Z] Copying: 993520/1048576 [kB] (10216 kBps)
[2024-12-09T17:19:48.192Z] Copying: 988/1024 [MB] (17 MBps)
[2024-12-09T17:19:49.136Z] Copying: 1001/1024 [MB] (13 MBps)
[2024-12-09T17:19:49.136Z] Copying: 1024/1024 [MB] (average 15 MBps)
00:30:26.095  [2024-12-09 17:19:49.084810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.095  [2024-12-09 17:19:49.084920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:30:26.095  [2024-12-09 17:19:49.084947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:30:26.095  [2024-12-09 17:19:49.084962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.095  [2024-12-09 17:19:49.084989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:26.095  [2024-12-09 17:19:49.088261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.095  [2024-12-09 17:19:49.088320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:30:26.095  [2024-12-09 17:19:49.088332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.252 ms
00:30:26.095  [2024-12-09 17:19:49.088341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.095  [2024-12-09 17:19:49.088640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.095  [2024-12-09 17:19:49.088655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:30:26.095  [2024-12-09 17:19:49.088666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.238 ms
00:30:26.095  [2024-12-09 17:19:49.088674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.095  [2024-12-09 17:19:49.092142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.095  [2024-12-09 17:19:49.092171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:30:26.095  [2024-12-09 17:19:49.092183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.455 ms
00:30:26.095  [2024-12-09 17:19:49.092197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.095  [2024-12-09 17:19:49.099661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.095  [2024-12-09 17:19:49.099717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:30:26.095  [2024-12-09 17:19:49.099731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.443 ms
00:30:26.095  [2024-12-09 17:19:49.099740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.095  [2024-12-09 17:19:49.128256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.095  [2024-12-09 17:19:49.128312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:30:26.095  [2024-12-09 17:19:49.128326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.437 ms
00:30:26.095  [2024-12-09 17:19:49.128334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.145758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.358  [2024-12-09 17:19:49.145818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:30:26.358  [2024-12-09 17:19:49.145832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.354 ms
00:30:26.358  [2024-12-09 17:19:49.145841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.151327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.358  [2024-12-09 17:19:49.151386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:30:26.358  [2024-12-09 17:19:49.151399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.397 ms
00:30:26.358  [2024-12-09 17:19:49.151407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.178469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.358  [2024-12-09 17:19:49.178526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:30:26.358  [2024-12-09 17:19:49.178539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.044 ms
00:30:26.358  [2024-12-09 17:19:49.178546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.205516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.358  [2024-12-09 17:19:49.205571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:30:26.358  [2024-12-09 17:19:49.205584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.916 ms
00:30:26.358  [2024-12-09 17:19:49.205592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.232106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.358  [2024-12-09 17:19:49.232156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:30:26.358  [2024-12-09 17:19:49.232169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.461 ms
00:30:26.358  [2024-12-09 17:19:49.232177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.258284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.358  [2024-12-09 17:19:49.258342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:30:26.358  [2024-12-09 17:19:49.258356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.008 ms
00:30:26.358  [2024-12-09 17:19:49.258363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.358  [2024-12-09 17:19:49.258415] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:26.358  [2024-12-09 17:19:49.258442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:30:26.358  [2024-12-09 17:19:49.258458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:     1792 / 261120 	wr_cnt: 1	state: open
00:30:26.358  [2024-12-09 17:19:49.258468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.358  [2024-12-09 17:19:49.258477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.358  [2024-12-09 17:19:49.258487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49.258496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49.258504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49.258513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49.258521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49.258530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49.258539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:30:26.359  [2024-12-09 17:19:49] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Bands  12-100:        0 / 261120 	wr_cnt: 0	state: free   [89 identical per-band lines condensed]
00:30:26.360  [2024-12-09 17:19:49.259281] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:30:26.360  [2024-12-09 17:19:49.259289] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         733c0fa8-9e02-4a21-8416-115b8afc7a4a
00:30:26.360  [2024-12-09 17:19:49.259298] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    262912
00:30:26.360  [2024-12-09 17:19:49.259305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:30:26.360  [2024-12-09 17:19:49.259312] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:30:26.360  [2024-12-09 17:19:49.259320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:30:26.360  [2024-12-09 17:19:49.259337] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:26.360  [2024-12-09 17:19:49.259346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:30:26.360  [2024-12-09 17:19:49.259354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:30:26.360  [2024-12-09 17:19:49.259361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:30:26.360  [2024-12-09 17:19:49.259367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
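
The stats dump above makes the write-amplification computation explicit: WAF is total device writes divided by user writes, and with user writes at 0 the 960 internal (metadata) writes divide by zero, which ftl_debug.c renders as "inf". A minimal sketch of the same computation in shell, assuming the two counters have already been scraped from the log (the variable names are illustrative, not SPDK's):

    total_writes=960   # "total writes" from the dump above
    user_writes=0      # "user writes" from the dump above
    if (( user_writes == 0 )); then
        echo "WAF: inf"
    else
        # awk does the floating-point division that bash arithmetic cannot
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
    fi
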
00:30:26.360  [2024-12-09 17:19:49.259374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.360  [2024-12-09 17:19:49.259382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:30:26.360  [2024-12-09 17:19:49.259392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.961 ms
00:30:26.360  [2024-12-09 17:19:49.259403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.360  [2024-12-09 17:19:49.274245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.360  [2024-12-09 17:19:49.274286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:30:26.360  [2024-12-09 17:19:49.274299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.798 ms
00:30:26.360  [2024-12-09 17:19:49.274308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.360  [2024-12-09 17:19:49.274736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:26.360  [2024-12-09 17:19:49.274768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:30:26.360  [2024-12-09 17:19:49.274782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.404 ms
00:30:26.360  [2024-12-09 17:19:49.274796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.360  [2024-12-09 17:19:49.315879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.360  [2024-12-09 17:19:49.315950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:26.360  [2024-12-09 17:19:49.315967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.360  [2024-12-09 17:19:49.315978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.360  [2024-12-09 17:19:49.316067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.360  [2024-12-09 17:19:49.316086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:26.360  [2024-12-09 17:19:49.316097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.360  [2024-12-09 17:19:49.316106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.360  [2024-12-09 17:19:49.316219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.360  [2024-12-09 17:19:49.316231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:26.360  [2024-12-09 17:19:49.316240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.360  [2024-12-09 17:19:49.316249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.360  [2024-12-09 17:19:49.316267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.360  [2024-12-09 17:19:49.316276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:26.360  [2024-12-09 17:19:49.316288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.360  [2024-12-09 17:19:49.316296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.409027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.409099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:26.622  [2024-12-09 17:19:49.409115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.409124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.484385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.484477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:26.622  [2024-12-09 17:19:49.484493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.484502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.484587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.484598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:26.622  [2024-12-09 17:19:49.484609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.484618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.484700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.484711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:26.622  [2024-12-09 17:19:49.484721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.484733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.484878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.484895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:26.622  [2024-12-09 17:19:49.484909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.484924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.484967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.484978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:30:26.622  [2024-12-09 17:19:49.484987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.484996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.485056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.485067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:26.622  [2024-12-09 17:19:49.485076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.485085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.485143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:26.622  [2024-12-09 17:19:49.485154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:26.622  [2024-12-09 17:19:49.485164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:26.622  [2024-12-09 17:19:49.485176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:26.622  [2024-12-09 17:19:49.485346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.482 ms, result 0
00:30:27.565  
00:30:27.566  
00:30:27.566   17:19:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:30:29.504  /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
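
This "OK" line is the whole point of the dirty-shutdown test: an md5 manifest was recorded while the data was live, and md5sum -c re-reads the restored file and compares digests, so a match means the data survived the unclean shutdown. A hedged sketch of the record/verify pair (paths as in the trace):

    # record a digest manifest before the shutdown...
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 > testfile2.md5
    # ...and verify the restored contents against it afterwards;
    # exit status is non-zero if any digest mismatches
    md5sum -c testfile2.md5
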
00:30:29.504   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:30:29.504   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:30:29.504   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:30:29.504   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:30:29.504   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82082
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82082 ']'
00:30:29.766  Process with pid 82082 is not found
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82082
00:30:29.766  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82082) - No such process
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82082 is not found'
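
The killprocess helper traced here degrades gracefully: it first checks the pid argument is non-empty, then uses kill -0 (signal 0 delivers nothing; it only probes existence/permission) to see whether the process is still alive, and merely reports when it is already gone, as happens above with pid 82082. A condensed sketch of that idiom, inferred from the trace rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid given
        if kill -0 "$pid" 2>/dev/null; then  # signal 0 = existence probe only
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }
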
00:30:29.766   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:30:30.027  Remove shared memory files
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:30:30.027  
00:30:30.027  real	4m21.957s
00:30:30.027  user	4m47.534s
00:30:30.027  sys	0m26.856s
00:30:30.027  ************************************
00:30:30.027  END TEST ftl_dirty_shutdown
00:30:30.027  ************************************
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:30.027   17:19:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:30.027   17:19:53 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:30:30.027   17:19:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:30.027   17:19:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:30.027   17:19:53 ftl -- common/autotest_common.sh@10 -- # set +x
00:30:30.027  ************************************
00:30:30.027  START TEST ftl_upgrade_shutdown
00:30:30.027  ************************************
00:30:30.027   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:30:30.286  * Looking for test storage...
00:30:30.286  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
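
The dense trace above is scripts/common.sh deciding whether the installed lcov is older than 2.0: cmp_versions splits both version strings on ".", "-" and ":" into arrays (IFS=.-:) and walks them component by component, returning as soon as one side wins; here 1 < 2 settles "lt 1.15 2" on the first component. A stripped-down sketch of that comparison, assuming numeric components only (cmp_lt is a hypothetical simplification, not the real helper):

    cmp_lt() {  # return 0 if $1 < $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # missing parts count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "1.15 is older than 2"
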
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:30.286  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:30.286  		--rc genhtml_branch_coverage=1
00:30:30.286  		--rc genhtml_function_coverage=1
00:30:30.286  		--rc genhtml_legend=1
00:30:30.286  		--rc geninfo_all_blocks=1
00:30:30.286  		--rc geninfo_unexecuted_blocks=1
00:30:30.286  		
00:30:30.286  		'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:30.286  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:30.286  		--rc genhtml_branch_coverage=1
00:30:30.286  		--rc genhtml_function_coverage=1
00:30:30.286  		--rc genhtml_legend=1
00:30:30.286  		--rc geninfo_all_blocks=1
00:30:30.286  		--rc geninfo_unexecuted_blocks=1
00:30:30.286  		
00:30:30.286  		'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:30.286  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:30.286  		--rc genhtml_branch_coverage=1
00:30:30.286  		--rc genhtml_function_coverage=1
00:30:30.286  		--rc genhtml_legend=1
00:30:30.286  		--rc geninfo_all_blocks=1
00:30:30.286  		--rc geninfo_unexecuted_blocks=1
00:30:30.286  		
00:30:30.286  		'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:30.286  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:30.286  		--rc genhtml_branch_coverage=1
00:30:30.286  		--rc genhtml_function_coverage=1
00:30:30.286  		--rc genhtml_legend=1
00:30:30.286  		--rc geninfo_all_blocks=1
00:30:30.286  		--rc geninfo_unexecuted_blocks=1
00:30:30.286  		
00:30:30.286  		'
00:30:30.286   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:30:30.286      17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:30:30.286     17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:30:30.286    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:30.287    17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84867
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84867
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84867 ']'
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:30:30.287  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:30.287   17:19:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:30.287  [2024-12-09 17:19:53.271709] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:30:30.287  [2024-12-09 17:19:53.271825] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84867 ]
00:30:30.548  [2024-12-09 17:19:53.426931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:30.548  [2024-12-09 17:19:53.519122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
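
Between the spdk_tgt launch and the first RPC, waitforlisten blocks until the target's UNIX-domain RPC socket answers, retrying up to max_retries=100 as set in the trace; the "Total cores available" and "Reactor started" notices above are the target coming up on core 0. A minimal polling sketch of the same idea, assuming rpc.py and the default /var/tmp/spdk.sock (not the exact helper body from autotest_common.sh):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        # rpc_get_methods succeeds once the target is listening
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
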
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]]
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]]
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]]
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]]
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]]
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:31.121   17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]]
00:30:31.121    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480
00:30:31.121    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base
00:30:31.121    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:30:31.121    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480
00:30:31.121    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:30:31.121     17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
00:30:31.383    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1
00:30:31.383    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size
00:30:31.383     17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1
00:30:31.383     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1
00:30:31.383     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:30:31.383     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:30:31.383     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:30:31.383      17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:30:31.644    {
00:30:31.644      "name": "basen1",
00:30:31.644      "aliases": [
00:30:31.644        "a2d03da6-032e-4317-9088-e7e8260788df"
00:30:31.644      ],
00:30:31.644      "product_name": "NVMe disk",
00:30:31.644      "block_size": 4096,
00:30:31.644      "num_blocks": 1310720,
00:30:31.644      "uuid": "a2d03da6-032e-4317-9088-e7e8260788df",
00:30:31.644      "numa_id": -1,
00:30:31.644      "assigned_rate_limits": {
00:30:31.644        "rw_ios_per_sec": 0,
00:30:31.644        "rw_mbytes_per_sec": 0,
00:30:31.644        "r_mbytes_per_sec": 0,
00:30:31.644        "w_mbytes_per_sec": 0
00:30:31.644      },
00:30:31.644      "claimed": true,
00:30:31.644      "claim_type": "read_many_write_one",
00:30:31.644      "zoned": false,
00:30:31.644      "supported_io_types": {
00:30:31.644        "read": true,
00:30:31.644        "write": true,
00:30:31.644        "unmap": true,
00:30:31.644        "flush": true,
00:30:31.644        "reset": true,
00:30:31.644        "nvme_admin": true,
00:30:31.644        "nvme_io": true,
00:30:31.644        "nvme_io_md": false,
00:30:31.644        "write_zeroes": true,
00:30:31.644        "zcopy": false,
00:30:31.644        "get_zone_info": false,
00:30:31.644        "zone_management": false,
00:30:31.644        "zone_append": false,
00:30:31.644        "compare": true,
00:30:31.644        "compare_and_write": false,
00:30:31.644        "abort": true,
00:30:31.644        "seek_hole": false,
00:30:31.644        "seek_data": false,
00:30:31.644        "copy": true,
00:30:31.644        "nvme_iov_md": false
00:30:31.644      },
00:30:31.644      "driver_specific": {
00:30:31.644        "nvme": [
00:30:31.644          {
00:30:31.644            "pci_address": "0000:00:11.0",
00:30:31.644            "trid": {
00:30:31.644              "trtype": "PCIe",
00:30:31.644              "traddr": "0000:00:11.0"
00:30:31.644            },
00:30:31.644            "ctrlr_data": {
00:30:31.644              "cntlid": 0,
00:30:31.644              "vendor_id": "0x1b36",
00:30:31.644              "model_number": "QEMU NVMe Ctrl",
00:30:31.644              "serial_number": "12341",
00:30:31.644              "firmware_revision": "8.0.0",
00:30:31.644              "subnqn": "nqn.2019-08.org.qemu:12341",
00:30:31.644              "oacs": {
00:30:31.644                "security": 0,
00:30:31.644                "format": 1,
00:30:31.644                "firmware": 0,
00:30:31.644                "ns_manage": 1
00:30:31.644              },
00:30:31.644              "multi_ctrlr": false,
00:30:31.644              "ana_reporting": false
00:30:31.644            },
00:30:31.644            "vs": {
00:30:31.644              "nvme_version": "1.4"
00:30:31.644            },
00:30:31.644            "ns_data": {
00:30:31.644              "id": 1,
00:30:31.644              "can_share": false
00:30:31.644            }
00:30:31.644          }
00:30:31.644        ],
00:30:31.644        "mp_policy": "active_passive"
00:30:31.644      }
00:30:31.644    }
00:30:31.644  ]'
00:30:31.644      17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:30:31.644      17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:30:31.644    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:30:31.644    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
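
The jq/arithmetic dance above is get_bdev_size converting the bdev_get_bdevs JSON into MiB: block_size 4096 × num_blocks 1310720 = 5368709120 bytes, i.e. 5120 MiB. The requested base of 20480 MiB is therefore larger than the 5120 MiB namespace, so the "[[ 20480 -le 5120 ]]" guard is false and the test falls through to using the whole bdev. The computation, spelled out:

    bs=4096        # .block_size from bdev_get_bdevs
    nb=1310720     # .num_blocks
    echo $(( bs * nb / 1024 / 1024 ))   # -> 5120 (MiB)
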
00:30:31.644    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:31.644     17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:30:31.905    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a59fdfcc-7132-44d4-92eb-d71f831eb4a4
00:30:31.905    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:30:31.905    17:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a59fdfcc-7132-44d4-92eb-d71f831eb4a4
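
clear_lvols is housekeeping before the new test: it lists any lvstores left behind by the previous run (here a59fdfcc-7132-44d4-92eb-d71f831eb4a4) and deletes them so basen1 is clean for a fresh store. The loop traced above amounts to, roughly:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$("$rpc_py" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        "$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"
    done
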
00:30:32.165     17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:30:32.425    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d7521178-9514-4bcb-8b6b-0f1c79d118a5
00:30:32.425    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d7521178-9514-4bcb-8b6b-0f1c79d118a5
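
Note the -t flag: the lvol is thin-provisioned, which is how a 20480 MiB logical volume fits on the 5120 MiB store created one step earlier; clusters are only allocated on write, and the "num_allocated_clusters": 0 in the JSON below confirms nothing is backed yet. The two RPCs, as traced:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_lvol_create_lvstore basen1 lvs
    # -t = thin provisioning: logical size may exceed the store's physical size
    "$rpc_py" bdev_lvol_create basen1p0 20480 -t -u d7521178-9514-4bcb-8b6b-0f1c79d118a5
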
00:30:32.687   17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d961cdfb-473a-4acd-bde5-9dd2266fc007
00:30:32.687   17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d961cdfb-473a-4acd-bde5-9dd2266fc007 ]]
00:30:32.687    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d961cdfb-473a-4acd-bde5-9dd2266fc007 5120
00:30:32.687    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
00:30:32.687    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:30:32.687    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d961cdfb-473a-4acd-bde5-9dd2266fc007
00:30:32.687    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
00:30:32.687     17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d961cdfb-473a-4acd-bde5-9dd2266fc007
00:30:32.687     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d961cdfb-473a-4acd-bde5-9dd2266fc007
00:30:32.687     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:30:32.687     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:30:32.687     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:30:32.687      17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d961cdfb-473a-4acd-bde5-9dd2266fc007
00:30:32.687     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:30:32.687    {
00:30:32.687      "name": "d961cdfb-473a-4acd-bde5-9dd2266fc007",
00:30:32.687      "aliases": [
00:30:32.687        "lvs/basen1p0"
00:30:32.687      ],
00:30:32.687      "product_name": "Logical Volume",
00:30:32.687      "block_size": 4096,
00:30:32.687      "num_blocks": 5242880,
00:30:32.687      "uuid": "d961cdfb-473a-4acd-bde5-9dd2266fc007",
00:30:32.687      "assigned_rate_limits": {
00:30:32.687        "rw_ios_per_sec": 0,
00:30:32.687        "rw_mbytes_per_sec": 0,
00:30:32.687        "r_mbytes_per_sec": 0,
00:30:32.687        "w_mbytes_per_sec": 0
00:30:32.687      },
00:30:32.687      "claimed": false,
00:30:32.687      "zoned": false,
00:30:32.687      "supported_io_types": {
00:30:32.687        "read": true,
00:30:32.687        "write": true,
00:30:32.687        "unmap": true,
00:30:32.687        "flush": false,
00:30:32.687        "reset": true,
00:30:32.687        "nvme_admin": false,
00:30:32.687        "nvme_io": false,
00:30:32.687        "nvme_io_md": false,
00:30:32.687        "write_zeroes": true,
00:30:32.687        "zcopy": false,
00:30:32.687        "get_zone_info": false,
00:30:32.687        "zone_management": false,
00:30:32.687        "zone_append": false,
00:30:32.687        "compare": false,
00:30:32.687        "compare_and_write": false,
00:30:32.687        "abort": false,
00:30:32.687        "seek_hole": true,
00:30:32.687        "seek_data": true,
00:30:32.687        "copy": false,
00:30:32.687        "nvme_iov_md": false
00:30:32.687      },
00:30:32.687      "driver_specific": {
00:30:32.687        "lvol": {
00:30:32.687          "lvol_store_uuid": "d7521178-9514-4bcb-8b6b-0f1c79d118a5",
00:30:32.687          "base_bdev": "basen1",
00:30:32.687          "thin_provision": true,
00:30:32.687          "num_allocated_clusters": 0,
00:30:32.687          "snapshot": false,
00:30:32.687          "clone": false,
00:30:32.687          "esnap_clone": false
00:30:32.687        }
00:30:32.687      }
00:30:32.687    }
00:30:32.687  ]'
00:30:32.687      17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:30:32.948     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:30:32.948      17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:30:32.948     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880
00:30:32.948     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480
00:30:32.948     17:19:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480
00:30:32.948    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
00:30:32.948    17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:30:32.948     17:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
00:30:33.209    17:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:30:33.209    17:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:30:33.209    17:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:30:33.209   17:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:30:33.209   17:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
00:30:33.209   17:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d961cdfb-473a-4acd-bde5-9dd2266fc007 -c cachen1p0 --l2p_dram_limit 2
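
This is the RPC that actually assembles the FTL device: the thin lvol is the base (data) bdev, the first 5120 MiB split of cachen1 is the non-volatile write cache, and --l2p_dram_limit 2 caps the in-DRAM logical-to-physical table at 2 GiB; -t 60 stretches the RPC timeout because FTL creation formats metadata. Condensed from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_split_create cachen1 -s 5120 1    # carve the NV cache partition
    # base: the thin lvol; cache: first split of cachen1; L2P capped at 2 GiB of DRAM
    "$rpc_py" -t 60 bdev_ftl_create -b ftl -d d961cdfb-473a-4acd-bde5-9dd2266fc007 -c cachen1p0 --l2p_dram_limit 2
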
00:30:33.471  [2024-12-09 17:19:56.416240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.471  [2024-12-09 17:19:56.416281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:30:33.471  [2024-12-09 17:19:56.416295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:30:33.471  [2024-12-09 17:19:56.416302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.471  [2024-12-09 17:19:56.416352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.471  [2024-12-09 17:19:56.416367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:30:33.471  [2024-12-09 17:19:56.416376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.036 ms
00:30:33.471  [2024-12-09 17:19:56.416382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.471  [2024-12-09 17:19:56.416400] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:30:33.471  [2024-12-09 17:19:56.416957] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:30:33.471  [2024-12-09 17:19:56.416975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.471  [2024-12-09 17:19:56.416982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:30:33.471  [2024-12-09 17:19:56.416993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.577 ms
00:30:33.471  [2024-12-09 17:19:56.416999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.471  [2024-12-09 17:19:56.417025] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b9c618aa-b25b-4f88-a58e-97446d043b7d
00:30:33.471  [2024-12-09 17:19:56.418364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.471  [2024-12-09 17:19:56.418392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Default-initialize superblock
00:30:33.472  [2024-12-09 17:19:56.418401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.025 ms
00:30:33.472  [2024-12-09 17:19:56.418410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.425397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.425425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:30:33.472  [2024-12-09 17:19:56.425434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 6.943 ms
00:30:33.472  [2024-12-09 17:19:56.425441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.425510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.425520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:30:33.472  [2024-12-09 17:19:56.425526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.019 ms
00:30:33.472  [2024-12-09 17:19:56.425536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.425573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.425584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:30:33.472  [2024-12-09 17:19:56.425593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:30:33.472  [2024-12-09 17:19:56.425602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.425620] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:30:33.472  [2024-12-09 17:19:56.428942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.428966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:30:33.472  [2024-12-09 17:19:56.428976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3.324 ms
00:30:33.472  [2024-12-09 17:19:56.428983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.429008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.429014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:30:33.472  [2024-12-09 17:19:56.429022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:30:33.472  [2024-12-09 17:19:56.429028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.429049] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1
00:30:33.472  [2024-12-09 17:19:56.429172] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:30:33.472  [2024-12-09 17:19:56.429186] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:30:33.472  [2024-12-09 17:19:56.429195] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:30:33.472  [2024-12-09 17:19:56.429205] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429212] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429221] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:30:33.472  [2024-12-09 17:19:56.429227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:30:33.472  [2024-12-09 17:19:56.429236] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:30:33.472  [2024-12-09 17:19:56.429242] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:30:33.472  [2024-12-09 17:19:56.429250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.429255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:30:33.472  [2024-12-09 17:19:56.429263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.202 ms
00:30:33.472  [2024-12-09 17:19:56.429269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
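
The layout numbers are self-consistent: 3774873 L2P entries × 4 bytes per address ≈ 14.40 MiB, which the region dump below carries as a 14.50 MiB l2p region (presumably rounded up to the layout's block granularity). As arithmetic:

    echo $(( 3774873 * 4 ))                                       # 15099492 bytes
    awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'    # ~14.40 MiB
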
00:30:33.472  [2024-12-09 17:19:56.429336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.472  [2024-12-09 17:19:56.429348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:30:33.472  [2024-12-09 17:19:56.429355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.054 ms
00:30:33.472  [2024-12-09 17:19:56.429360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.472  [2024-12-09 17:19:56.429443] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:30:33.472  [2024-12-09 17:19:56.429452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:30:33.472  [2024-12-09 17:19:56.429460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:30:33.472  [2024-12-09 17:19:56.429479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:30:33.472  [2024-12-09 17:19:56.429493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:30:33.472  [2024-12-09 17:19:56.429500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:30:33.472  [2024-12-09 17:19:56.429506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:30:33.472  [2024-12-09 17:19:56.429519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:30:33.472  [2024-12-09 17:19:56.429525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:30:33.472  [2024-12-09 17:19:56.429538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:30:33.472  [2024-12-09 17:19:56.429542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:30:33.472  [2024-12-09 17:19:56.429556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:30:33.472  [2024-12-09 17:19:56.429563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:30:33.472  [2024-12-09 17:19:56.429574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:30:33.472  [2024-12-09 17:19:56.429579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:30:33.472  [2024-12-09 17:19:56.429590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:30:33.472  [2024-12-09 17:19:56.429597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:30:33.472  [2024-12-09 17:19:56.429609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:30:33.472  [2024-12-09 17:19:56.429614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:30:33.472  [2024-12-09 17:19:56.429625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:30:33.472  [2024-12-09 17:19:56.429632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:30:33.472  [2024-12-09 17:19:56.429647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:30:33.472  [2024-12-09 17:19:56.429652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:30:33.472  [2024-12-09 17:19:56.429664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:30:33.472  [2024-12-09 17:19:56.429684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:30:33.472  [2024-12-09 17:19:56.429702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:30:33.472  [2024-12-09 17:19:56.429709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:30:33.472  [2024-12-09 17:19:56.429722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:30:33.472  [2024-12-09 17:19:56.429728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:30:33.472  [2024-12-09 17:19:56.429736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:33.472  [2024-12-09 17:19:56.429742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:30:33.472  [2024-12-09 17:19:56.429751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:30:33.472  [2024-12-09 17:19:56.429770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:30:33.472  [2024-12-09 17:19:56.429777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:30:33.472  [2024-12-09 17:19:56.429783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:30:33.472  [2024-12-09 17:19:56.429790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
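Note that dump_region reports both offsets and "blocks" in MiB. The numbers are internally consistent: data_btm starts right after the 0.25 MiB superblock area and runs 18,432.00 MiB, so vmap lands at exactly 0.25 + 18,432.00 = 18,432.25 MiB; those 18 GiB of band data are 90% of the 20,480 MiB base device, the remainder being metadata and unallocated blocks.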
00:30:33.472  [2024-12-09 17:19:56.429797] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:30:33.472  [2024-12-09 17:19:56.429809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:33.472  [2024-12-09 17:19:56.429815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:30:33.472  [2024-12-09 17:19:56.429823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:30:33.472  [2024-12-09 17:19:56.429828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:30:33.472  [2024-12-09 17:19:56.429835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:30:33.472  [2024-12-09 17:19:56.429841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:30:33.472  [2024-12-09 17:19:56.429859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:30:33.472  [2024-12-09 17:19:56.429865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:30:33.472  [2024-12-09 17:19:56.429873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:30:33.472  [2024-12-09 17:19:56.429879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:30:33.473  [2024-12-09 17:19:56.429919] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:30:33.473  [2024-12-09 17:19:56.429927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:30:33.473  [2024-12-09 17:19:56.429943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:30:33.473  [2024-12-09 17:19:56.429949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:30:33.473  [2024-12-09 17:19:56.429956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
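The hex superblock entries are the same layout expressed in 4 KiB blocks: 0xe80 = 3712 blocks = 14.5 MiB (l2p) and 0x480000 = 4,718,592 blocks = 18,432 MiB (the data region), with type 0xfffffffe marking free space. Each device sums to its full capacity: 0x2fa0 + 0x13d060 = 0x140000 blocks = 5,120 MiB of NV cache, and 0x480120 + 0x7fee0 = 0x500000 blocks = 20,480 MiB of base device.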
00:30:33.473  [2024-12-09 17:19:56.429962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:33.473  [2024-12-09 17:19:56.429972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:30:33.473  [2024-12-09 17:19:56.429978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.576 ms
00:30:33.473  [2024-12-09 17:19:56.429986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:33.473  [2024-12-09 17:19:56.430030] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
00:30:33.473  [2024-12-09 17:19:56.430042] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:30:37.672  [2024-12-09 17:20:00.096652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.096723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Scrub NV cache
00:30:37.672  [2024-12-09 17:20:00.096737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3666.607 ms
00:30:37.672  [2024-12-09 17:20:00.096746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
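The scrub dominates bring-up: overwriting the ~5 GiB NV-cache data area (5 chunks) in 3,666.607 ms works out to roughly 5,120 MiB / 3.67 s ≈ 1.4 GiB/s of sequential writes to the cache device.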
00:30:37.672  [2024-12-09 17:20:00.120541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.120605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:30:37.672  [2024-12-09 17:20:00.120617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 23.606 ms
00:30:37.672  [2024-12-09 17:20:00.120626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.120696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.120707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:30:37.672  [2024-12-09 17:20:00.120719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.014 ms
00:30:37.672  [2024-12-09 17:20:00.120731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.147428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.147459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:30:37.672  [2024-12-09 17:20:00.147469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 26.668 ms
00:30:37.672  [2024-12-09 17:20:00.147478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.147507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.147518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:30:37.672  [2024-12-09 17:20:00.147525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:30:37.672  [2024-12-09 17:20:00.147533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.147958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.147981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:30:37.672  [2024-12-09 17:20:00.147995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.386 ms
00:30:37.672  [2024-12-09 17:20:00.148004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.148040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.148049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:30:37.672  [2024-12-09 17:20:00.148057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.019 ms
00:30:37.672  [2024-12-09 17:20:00.148067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.161045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.161071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:30:37.672  [2024-12-09 17:20:00.161079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.963 ms
00:30:37.672  [2024-12-09 17:20:00.161087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.181573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:30:37.672  [2024-12-09 17:20:00.182721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.182751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:30:37.672  [2024-12-09 17:20:00.182767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 21.554 ms
00:30:37.672  [2024-12-09 17:20:00.182778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.208206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.208236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear L2P
00:30:37.672  [2024-12-09 17:20:00.208248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 25.386 ms
00:30:37.672  [2024-12-09 17:20:00.208255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.672  [2024-12-09 17:20:00.208329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.672  [2024-12-09 17:20:00.208340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:30:37.673  [2024-12-09 17:20:00.208352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.042 ms
00:30:37.673  [2024-12-09 17:20:00.208374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.226423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.226447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Save initial band info metadata
00:30:37.673  [2024-12-09 17:20:00.226459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 18.008 ms
00:30:37.673  [2024-12-09 17:20:00.226466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.243734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.243756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Save initial chunk info metadata
00:30:37.673  [2024-12-09 17:20:00.243766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.235 ms
00:30:37.673  [2024-12-09 17:20:00.243772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.244234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.244249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:30:37.673  [2024-12-09 17:20:00.244258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.434 ms
00:30:37.673  [2024-12-09 17:20:00.244267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.307738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.307764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Wipe P2L region
00:30:37.673  [2024-12-09 17:20:00.307777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 63.444 ms
00:30:37.673  [2024-12-09 17:20:00.307785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.326719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.326743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear trim map
00:30:37.673  [2024-12-09 17:20:00.326754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 18.877 ms
00:30:37.673  [2024-12-09 17:20:00.326761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.344718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.344739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear trim log
00:30:37.673  [2024-12-09 17:20:00.344749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.926 ms
00:30:37.673  [2024-12-09 17:20:00.344755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.362363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.362386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL dirty state
00:30:37.673  [2024-12-09 17:20:00.362396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.578 ms
00:30:37.673  [2024-12-09 17:20:00.362402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.362435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.362443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:30:37.673  [2024-12-09 17:20:00.362454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:30:37.673  [2024-12-09 17:20:00.362460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.362543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:37.673  [2024-12-09 17:20:00.362553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:30:37.673  [2024-12-09 17:20:00.362561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.029 ms
00:30:37.673  [2024-12-09 17:20:00.362567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:37.673  [2024-12-09 17:20:00.363661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3947.040 ms, result 0
00:30:37.673  {
00:30:37.673    "name": "ftl",
00:30:37.673    "uuid": "b9c618aa-b25b-4f88-a58e-97446d043b7d"
00:30:37.673  }
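That closes the 3,947 ms "FTL startup" management process; the NV-cache scrub alone accounts for 3,666.6 of those milliseconds (≈93%), with every other step finishing in tens of milliseconds or less. The JSON block is the RPC reply, carrying the new bdev's name and UUID.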
00:30:37.673   17:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
00:30:37.673  [2024-12-09 17:20:00.574760] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:37.673   17:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
00:30:37.934   17:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
00:30:38.194  [2024-12-09 17:20:00.979123] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:30:38.194   17:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
00:30:38.194  [2024-12-09 17:20:01.175396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
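Taken together, the four rpc.py calls at common.sh@121-124 publish the new FTL bdev over NVMe/TCP. A minimal standalone sketch of the same sequence ($SPDK_DIR is an assumed shorthand; the NQN, address, and port are the values this test uses):

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport --trtype TCP                         # TCP transport init
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # allow any host, max 1 ns
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # expose bdev "ftl" as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
         -t TCP -f ipv4 -a 127.0.0.1 -s 4420                        # listen on 127.0.0.1:4420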
00:30:38.194   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=()
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 ))
00:30:38.765  Fill FTL, iteration 1
00:30:38.765  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1'
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]]
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84989
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84989 /var/tmp/spdk.tgt.sock
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84989 ']'
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...'
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:38.765   17:20:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:38.765  [2024-12-09 17:20:01.594617] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:30:38.765  [2024-12-09 17:20:01.594744] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84989 ]
00:30:38.765  [2024-12-09 17:20:01.755510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:39.026  [2024-12-09 17:20:01.856490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:39.595   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:39.595   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:30:39.595   17:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
00:30:39.855  ftln1
00:30:39.855   17:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": ['
00:30:39.855   17:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}'
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84989
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84989 ']'
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84989
00:30:40.117    17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:40.117    17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84989
00:30:40.117  killing process with pid 84989
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84989'
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84989
00:30:40.117   17:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84989
00:30:41.504   17:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid
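The block above is ftl/common.sh's tcp_initiator_setup: it boots a throwaway spdk_tgt on a private RPC socket, attaches it to the NVMe/TCP subsystem (the namespace surfaces as bdev "ftln1"), snapshots a bdev-only config for spdk_dd to load, then tears the helper down. Roughly, as reconstructed from the xtrace (not the script verbatim; $SPDK_DIR is an assumed shorthand):

    "$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"

    # Connect to the target exported above; prints the new bdev name ("ftln1")
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
         -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # Wrap the bdev subsystem config so spdk_dd can replay it via --json
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > "$SPDK_DIR/test/ftl/config/ini.json"

    kill "$spdk_ini_pid"; wait "$spdk_ini_pid"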
00:30:41.504   17:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:30:41.504  [2024-12-09 17:20:04.448515] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:30:41.504  [2024-12-09 17:20:04.448787] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85032 ]
00:30:41.765  [2024-12-09 17:20:04.610640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:41.766  [2024-12-09 17:20:04.708692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:43.225  
[2024-12-09T17:20:07.211Z] Copying: 198/1024 [MB] (198 MBps)
[2024-12-09T17:20:08.156Z] Copying: 396/1024 [MB] (198 MBps)
[2024-12-09T17:20:09.099Z] Copying: 603/1024 [MB] (207 MBps)
[2024-12-09T17:20:10.487Z] Copying: 804/1024 [MB] (201 MBps)
[2024-12-09T17:20:10.487Z] Copying: 992/1024 [MB] (188 MBps)
[2024-12-09T17:20:11.060Z] Copying: 1024/1024 [MB] (average 197 MBps)
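At the averaged 197 MBps, pushing 1024 MiB of urandom data through the NVMe/TCP loopback into the FTL bdev takes 1024 / 197 ≈ 5.2 s, consistent with the spread of the progress timestamps above.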
00:30:48.019  
00:30:48.019  Calculate MD5 checksum, iteration 1
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1'
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:48.019   17:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:48.280  [2024-12-09 17:20:11.060612] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:30:48.280  [2024-12-09 17:20:11.060718] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85102 ]
00:30:48.280  [2024-12-09 17:20:11.220076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:48.542  [2024-12-09 17:20:11.319831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:49.930  
[2024-12-09T17:20:13.544Z] Copying: 645/1024 [MB] (645 MBps)
[2024-12-09T17:20:14.116Z] Copying: 1024/1024 [MB] (average 646 MBps)
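Reading the same 1024 MiB back out averages 646 MBps (1024 / 646 ≈ 1.6 s), about 3.3× the rate of the fill in iteration 1.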
00:30:51.075  
00:30:51.075   17:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024
00:30:51.075   17:20:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:52.987    17:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:30:53.247  Fill FTL, iteration 2
00:30:53.247   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dea1066b344617e97337020abd7f5e33
00:30:53.247   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:30:53.247   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:30:53.247   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2'
00:30:53.247   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:30:53.247   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:53.248   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:53.248   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:53.248   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:53.248   17:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:30:53.248  [2024-12-09 17:20:16.091195] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:30:53.248  [2024-12-09 17:20:16.091317] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85162 ]
00:30:53.248  [2024-12-09 17:20:16.250267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:53.508  [2024-12-09 17:20:16.352062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:54.895  
[2024-12-09T17:20:18.880Z] Copying: 206/1024 [MB] (206 MBps)
[2024-12-09T17:20:19.822Z] Copying: 446/1024 [MB] (240 MBps)
[2024-12-09T17:20:20.790Z] Copying: 694/1024 [MB] (248 MBps)
[2024-12-09T17:20:21.363Z] Copying: 907/1024 [MB] (213 MBps)
[2024-12-09T17:20:21.936Z] Copying: 1024/1024 [MB] (average 225 MBps)
00:30:58.895  
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048
00:30:58.895  Calculate MD5 checksum, iteration 2
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2'
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:58.895   17:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:59.157  [2024-12-09 17:20:21.977108] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:30:59.157  [2024-12-09 17:20:21.977226] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85216 ]
00:30:59.157  [2024-12-09 17:20:22.132320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:59.419  [2024-12-09 17:20:22.229335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:00.846  
[2024-12-09T17:20:24.459Z] Copying: 692/1024 [MB] (692 MBps)
[2024-12-09T17:20:25.401Z] Copying: 1024/1024 [MB] (average 661 MBps)
00:31:02.360  
00:31:02.360   17:20:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:31:02.360   17:20:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:04.276    17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:31:04.276   17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fd290a1bc49d464960ccc4901c62a5b0
00:31:04.276   17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:31:04.276   17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
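With both iterations done, the upgrade_shutdown.sh xtrace above reassembles into a loop of this shape (a reconstruction from the trace, not the script verbatim; tcp_dd is the common.sh wrapper that runs spdk_dd against ini.json):

    bs=1048576; count=1024; qd=2; iterations=2
    seek=0; skip=0; sums=()

    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        # Write 1 GiB of random data into the FTL bdev over NVMe/TCP
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))

        echo "Calculate MD5 checksum, iteration $((i + 1))"
        # Read the same 1 GiB back to a scratch file and fingerprint it
        tcp_dd --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$SPDK_DIR/test/ftl/file" | cut -f1 -d' ')
    done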
00:31:04.276   17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:04.537  [2024-12-09 17:20:27.462625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:04.537  [2024-12-09 17:20:27.462672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:04.537  [2024-12-09 17:20:27.462685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.008 ms
00:31:04.537  [2024-12-09 17:20:27.462692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:04.537  [2024-12-09 17:20:27.462713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:04.537  [2024-12-09 17:20:27.462724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:04.537  [2024-12-09 17:20:27.462731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:04.537  [2024-12-09 17:20:27.462737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:04.537  [2024-12-09 17:20:27.462753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:04.537  [2024-12-09 17:20:27.462760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:04.537  [2024-12-09 17:20:27.462768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:31:04.537  [2024-12-09 17:20:27.462774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:04.537  [2024-12-09 17:20:27.462831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.194 ms, result 0
00:31:04.537  true
00:31:04.537   17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:04.798  {
00:31:04.798    "name": "ftl",
00:31:04.798    "properties": [
00:31:04.798      {
00:31:04.798        "name": "superblock_version",
00:31:04.799        "value": 5,
00:31:04.799        "read-only": true
00:31:04.799      },
00:31:04.799      {
00:31:04.799        "name": "base_device",
00:31:04.799        "bands": [
00:31:04.799          {
00:31:04.799            "id": 0,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 1,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 2,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 3,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 4,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 5,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 6,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 7,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 8,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 9,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 10,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 11,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 12,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 13,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 14,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 15,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 16,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 17,
00:31:04.799            "state": "FREE",
00:31:04.799            "validity": 0.0
00:31:04.799          }
00:31:04.799        ],
00:31:04.799        "read-only": true
00:31:04.799      },
00:31:04.799      {
00:31:04.799        "name": "cache_device",
00:31:04.799        "type": "bdev",
00:31:04.799        "chunks": [
00:31:04.799          {
00:31:04.799            "id": 0,
00:31:04.799            "state": "INACTIVE",
00:31:04.799            "utilization": 0.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 1,
00:31:04.799            "state": "CLOSED",
00:31:04.799            "utilization": 1.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 2,
00:31:04.799            "state": "CLOSED",
00:31:04.799            "utilization": 1.0
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 3,
00:31:04.799            "state": "OPEN",
00:31:04.799            "utilization": 0.001953125
00:31:04.799          },
00:31:04.799          {
00:31:04.799            "id": 4,
00:31:04.799            "state": "OPEN",
00:31:04.799            "utilization": 0.0
00:31:04.799          }
00:31:04.799        ],
00:31:04.799        "read-only": true
00:31:04.799      },
00:31:04.799      {
00:31:04.799        "name": "verbose_mode",
00:31:04.799        "value": true,
00:31:04.799        "unit": "",
00:31:04.799        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:04.799      },
00:31:04.799      {
00:31:04.799        "name": "prep_upgrade_on_shutdown",
00:31:04.799        "value": false,
00:31:04.799        "unit": "",
00:31:04.799        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:04.799      }
00:31:04.799    ]
00:31:04.799  }
00:31:04.799   17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:31:05.060  [2024-12-09 17:20:27.870992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:05.060  [2024-12-09 17:20:27.871032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:05.060  [2024-12-09 17:20:27.871043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.007 ms
00:31:05.060  [2024-12-09 17:20:27.871049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:05.060  [2024-12-09 17:20:27.871066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:05.060  [2024-12-09 17:20:27.871073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:05.060  [2024-12-09 17:20:27.871080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:05.060  [2024-12-09 17:20:27.871086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:05.060  [2024-12-09 17:20:27.871101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:05.060  [2024-12-09 17:20:27.871108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:05.060  [2024-12-09 17:20:27.871114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:31:05.060  [2024-12-09 17:20:27.871120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:05.060  [2024-12-09 17:20:27.871178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.168 ms, result 0
00:31:05.060  true
00:31:05.060    17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:31:05.060    17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:05.060    17:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:31:05.320   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:31:05.320   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
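The guard at @63-64 verifies the fills actually reached the NV cache: the jq filter counts cache_device chunks with nonzero utilization in the JSON above, and chunks 1 and 2 (CLOSED, 1.0) plus chunk 3 (OPEN, 0.001953125) give used=3, so [[ 3 -eq 0 ]] is false and the run proceeds. Standalone, the same query looks like:

    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length'   # -> 3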
00:31:05.320   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:05.320  [2024-12-09 17:20:28.247318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:05.320  [2024-12-09 17:20:28.247359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:05.320  [2024-12-09 17:20:28.247370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:31:05.320  [2024-12-09 17:20:28.247376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:05.320  [2024-12-09 17:20:28.247393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:05.320  [2024-12-09 17:20:28.247400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:05.320  [2024-12-09 17:20:28.247407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:31:05.320  [2024-12-09 17:20:28.247413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:05.320  [2024-12-09 17:20:28.247428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:05.320  [2024-12-09 17:20:28.247435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:05.320  [2024-12-09 17:20:28.247441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:31:05.320  [2024-12-09 17:20:28.247446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:05.320  [2024-12-09 17:20:28.247495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.170 ms, result 0
00:31:05.320  true
00:31:05.320   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:05.581  {
00:31:05.581    "name": "ftl",
00:31:05.581    "properties": [
00:31:05.581      {
00:31:05.581        "name": "superblock_version",
00:31:05.581        "value": 5,
00:31:05.581        "read-only": true
00:31:05.581      },
00:31:05.581      {
00:31:05.581        "name": "base_device",
00:31:05.581        "bands": [
00:31:05.581          {
00:31:05.581            "id": 0,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 1,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 2,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 3,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 4,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 5,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 6,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 7,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 8,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 9,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 10,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 11,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 12,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 13,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 14,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 15,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 16,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 17,
00:31:05.581            "state": "FREE",
00:31:05.581            "validity": 0.0
00:31:05.581          }
00:31:05.581        ],
00:31:05.581        "read-only": true
00:31:05.581      },
00:31:05.581      {
00:31:05.581        "name": "cache_device",
00:31:05.581        "type": "bdev",
00:31:05.581        "chunks": [
00:31:05.581          {
00:31:05.581            "id": 0,
00:31:05.581            "state": "INACTIVE",
00:31:05.581            "utilization": 0.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 1,
00:31:05.581            "state": "CLOSED",
00:31:05.581            "utilization": 1.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 2,
00:31:05.581            "state": "CLOSED",
00:31:05.581            "utilization": 1.0
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 3,
00:31:05.581            "state": "OPEN",
00:31:05.581            "utilization": 0.001953125
00:31:05.581          },
00:31:05.581          {
00:31:05.581            "id": 4,
00:31:05.581            "state": "OPEN",
00:31:05.581            "utilization": 0.0
00:31:05.581          }
00:31:05.581        ],
00:31:05.581        "read-only": true
00:31:05.581      },
00:31:05.581      {
00:31:05.581        "name": "verbose_mode",
00:31:05.581        "value": true,
00:31:05.581        "unit": "",
00:31:05.581        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:05.581      },
00:31:05.581      {
00:31:05.581        "name": "prep_upgrade_on_shutdown",
00:31:05.581        "value": true,
00:31:05.581        "unit": "",
00:31:05.581        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:05.581      }
00:31:05.581    ]
00:31:05.581  }
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84867 ]]
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84867
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84867 ']'
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84867
00:31:05.581    17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:05.581    17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84867
00:31:05.581  killing process with pid 84867
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84867'
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84867
00:31:05.581   17:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84867
00:31:06.151  [2024-12-09 17:20:29.042429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:31:06.151  [2024-12-09 17:20:29.053258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:06.152  [2024-12-09 17:20:29.053298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinit core IO channel
00:31:06.152  [2024-12-09 17:20:29.053311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:06.152  [2024-12-09 17:20:29.053318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:06.152  [2024-12-09 17:20:29.053337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:31:06.152  [2024-12-09 17:20:29.055535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:06.152  [2024-12-09 17:20:29.055563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Unregister IO device
00:31:06.152  [2024-12-09 17:20:29.055572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2.186 ms
00:31:06.152  [2024-12-09 17:20:29.055583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.077728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.077778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Stop core poller
00:31:16.163  [2024-12-09 17:20:38.077796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 9022.091 ms
00:31:16.163  [2024-12-09 17:20:38.077804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.078869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.078888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist L2P
00:31:16.163  [2024-12-09 17:20:38.078897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.051 ms
00:31:16.163  [2024-12-09 17:20:38.078904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.079753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.079772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finish L2P trims
00:31:16.163  [2024-12-09 17:20:38.079781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.828 ms
00:31:16.163  [2024-12-09 17:20:38.079793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.087741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.087772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist NV cache metadata
00:31:16.163  [2024-12-09 17:20:38.087781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.919 ms
00:31:16.163  [2024-12-09 17:20:38.087787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.093529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.093556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist valid map metadata
00:31:16.163  [2024-12-09 17:20:38.093566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 5.717 ms
00:31:16.163  [2024-12-09 17:20:38.093573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.093628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.093640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist P2L metadata
00:31:16.163  [2024-12-09 17:20:38.093647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.029 ms
00:31:16.163  [2024-12-09 17:20:38.093653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.100955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.100980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist band info metadata
00:31:16.163  [2024-12-09 17:20:38.100988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.290 ms
00:31:16.163  [2024-12-09 17:20:38.100993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.108229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.108253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist trim metadata
00:31:16.163  [2024-12-09 17:20:38.108260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.212 ms
00:31:16.163  [2024-12-09 17:20:38.108266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.115446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.115471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist superblock
00:31:16.163  [2024-12-09 17:20:38.115478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.157 ms
00:31:16.163  [2024-12-09 17:20:38.115484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.122550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.163  [2024-12-09 17:20:38.122575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL clean state
00:31:16.163  [2024-12-09 17:20:38.122582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.011 ms
00:31:16.163  [2024-12-09 17:20:38.122588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.163  [2024-12-09 17:20:38.122612] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:31:16.164  [2024-12-09 17:20:38.122631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:16.164  [2024-12-09 17:20:38.122640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   2:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:16.164  [2024-12-09 17:20:38.122647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   3:     2048 / 261120 	wr_cnt: 1	state: closed
00:31:16.164  [2024-12-09 17:20:38.122654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:31:16.164  [2024-12-09 17:20:38.122744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:31:16.164  [2024-12-09 17:20:38.122751] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID:         b9c618aa-b25b-4f88-a58e-97446d043b7d
00:31:16.164  [2024-12-09 17:20:38.122758] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs:    524288
00:31:16.164  [2024-12-09 17:20:38.122763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes:        786752
00:31:16.164  [2024-12-09 17:20:38.122769] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes:         524288
00:31:16.164  [2024-12-09 17:20:38.122776] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF:                 1.5006
00:31:16.164  [2024-12-09 17:20:38.122784] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:31:16.164  [2024-12-09 17:20:38.122790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   crit: 0
00:31:16.164  [2024-12-09 17:20:38.122799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   high: 0
00:31:16.164  [2024-12-09 17:20:38.122804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]    low: 0
00:31:16.164  [2024-12-09 17:20:38.122810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]  start: 0
00:31:16.164  [2024-12-09 17:20:38.122816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.164  [2024-12-09 17:20:38.122823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Dump statistics
00:31:16.164  [2024-12-09 17:20:38.122829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.205 ms
00:31:16.164  [2024-12-09 17:20:38.122835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.132922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.164  [2024-12-09 17:20:38.132948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize L2P
00:31:16.164  [2024-12-09 17:20:38.132960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 10.065 ms
00:31:16.164  [2024-12-09 17:20:38.132967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.133257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:16.164  [2024-12-09 17:20:38.133271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize P2L checkpointing
00:31:16.164  [2024-12-09 17:20:38.133278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.276 ms
00:31:16.164  [2024-12-09 17:20:38.133284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.167656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.167690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:16.164  [2024-12-09 17:20:38.167698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.167705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.167732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.167738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:16.164  [2024-12-09 17:20:38.167745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.167751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.167804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.167813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:16.164  [2024-12-09 17:20:38.167822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.167829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.167841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.167866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:16.164  [2024-12-09 17:20:38.167873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.167880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.230124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.230162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:16.164  [2024-12-09 17:20:38.230176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.230183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.280728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.280765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:16.164  [2024-12-09 17:20:38.280776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.280783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.280876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.280885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:16.164  [2024-12-09 17:20:38.280893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.280904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.280939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.280947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:16.164  [2024-12-09 17:20:38.280954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.280961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.281036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.281054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:16.164  [2024-12-09 17:20:38.281061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.281068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.281097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.281104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize superblock
00:31:16.164  [2024-12-09 17:20:38.281111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.281118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.281155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.281162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:16.164  [2024-12-09 17:20:38.281168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.281176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.281220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:16.164  [2024-12-09 17:20:38.281228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:16.164  [2024-12-09 17:20:38.281234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:16.164  [2024-12-09 17:20:38.281241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:16.164  [2024-12-09 17:20:38.281353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9228.035 ms, result 0
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85434
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85434
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85434 ']'
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:17.552  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:17.552   17:20:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:17.552  [2024-12-09 17:20:40.580968] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:17.552  [2024-12-09 17:20:40.581431] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85434 ]
00:31:17.814  [2024-12-09 17:20:40.737830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:17.814  [2024-12-09 17:20:40.831042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:18.759  [2024-12-09 17:20:41.456506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:31:18.759  [2024-12-09 17:20:41.456570] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:31:18.759  [2024-12-09 17:20:41.605320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.605358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:31:18.759  [2024-12-09 17:20:41.605369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:18.759  [2024-12-09 17:20:41.605376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.605422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.605431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:18.759  [2024-12-09 17:20:41.605437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.031 ms
00:31:18.759  [2024-12-09 17:20:41.605444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.605462] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:31:18.759  [2024-12-09 17:20:41.606186] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:31:18.759  [2024-12-09 17:20:41.606219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.606227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:18.759  [2024-12-09 17:20:41.606235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.764 ms
00:31:18.759  [2024-12-09 17:20:41.606241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.607594] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:31:18.759  [2024-12-09 17:20:41.617727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.617754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Load super block
00:31:18.759  [2024-12-09 17:20:41.617767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 10.134 ms
00:31:18.759  [2024-12-09 17:20:41.617774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.617879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.617899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Validate super block
00:31:18.759  [2024-12-09 17:20:41.617906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.020 ms
00:31:18.759  [2024-12-09 17:20:41.617912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.624097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.624122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:18.759  [2024-12-09 17:20:41.624130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 6.139 ms
00:31:18.759  [2024-12-09 17:20:41.624136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.624183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.624190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:18.759  [2024-12-09 17:20:41.624197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.030 ms
00:31:18.759  [2024-12-09 17:20:41.624203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.624248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.624259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:31:18.759  [2024-12-09 17:20:41.624266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.007 ms
00:31:18.759  [2024-12-09 17:20:41.624273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.624290] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:31:18.759  [2024-12-09 17:20:41.627299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.627322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:18.759  [2024-12-09 17:20:41.627330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3.014 ms
00:31:18.759  [2024-12-09 17:20:41.627339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.627365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.759  [2024-12-09 17:20:41.627372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:31:18.759  [2024-12-09 17:20:41.627378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:18.759  [2024-12-09 17:20:41.627384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.759  [2024-12-09 17:20:41.627400] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0
00:31:18.759  [2024-12-09 17:20:41.627420] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes
00:31:18.759  [2024-12-09 17:20:41.627448] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes
00:31:18.759  [2024-12-09 17:20:41.627460] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes
00:31:18.760  [2024-12-09 17:20:41.627542] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:31:18.760  [2024-12-09 17:20:41.627550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:31:18.760  [2024-12-09 17:20:41.627559] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:31:18.760  [2024-12-09 17:20:41.627567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627573] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:31:18.760  [2024-12-09 17:20:41.627588] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:31:18.760  [2024-12-09 17:20:41.627594] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:31:18.760  [2024-12-09 17:20:41.627601] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:31:18.760  [2024-12-09 17:20:41.627607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.760  [2024-12-09 17:20:41.627612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:31:18.760  [2024-12-09 17:20:41.627618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.208 ms
00:31:18.760  [2024-12-09 17:20:41.627624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.760  [2024-12-09 17:20:41.627689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.760  [2024-12-09 17:20:41.627696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:31:18.760  [2024-12-09 17:20:41.627704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.053 ms
00:31:18.760  [2024-12-09 17:20:41.627709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.760  [2024-12-09 17:20:41.627785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:31:18.760  [2024-12-09 17:20:41.627793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:31:18.760  [2024-12-09 17:20:41.627799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:31:18.760  [2024-12-09 17:20:41.627816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:31:18.760  [2024-12-09 17:20:41.627828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:31:18.760  [2024-12-09 17:20:41.627833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:31:18.760  [2024-12-09 17:20:41.627838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:31:18.760  [2024-12-09 17:20:41.627863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:31:18.760  [2024-12-09 17:20:41.627869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:31:18.760  [2024-12-09 17:20:41.627880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:31:18.760  [2024-12-09 17:20:41.627885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:31:18.760  [2024-12-09 17:20:41.627896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:31:18.760  [2024-12-09 17:20:41.627901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:31:18.760  [2024-12-09 17:20:41.627912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:31:18.760  [2024-12-09 17:20:41.627918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:31:18.760  [2024-12-09 17:20:41.627933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:31:18.760  [2024-12-09 17:20:41.627938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:31:18.760  [2024-12-09 17:20:41.627949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:31:18.760  [2024-12-09 17:20:41.627954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:31:18.760  [2024-12-09 17:20:41.627966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:31:18.760  [2024-12-09 17:20:41.627971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:18.760  [2024-12-09 17:20:41.627976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:31:18.760  [2024-12-09 17:20:41.627981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:31:18.760  [2024-12-09 17:20:41.627986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.627991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:31:18.760  [2024-12-09 17:20:41.627996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:31:18.760  [2024-12-09 17:20:41.628001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.628006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:31:18.760  [2024-12-09 17:20:41.628012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:31:18.760  [2024-12-09 17:20:41.628017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.628022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:31:18.760  [2024-12-09 17:20:41.628027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:31:18.760  [2024-12-09 17:20:41.628032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.628040] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:31:18.760  [2024-12-09 17:20:41.628046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:31:18.760  [2024-12-09 17:20:41.628053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:18.760  [2024-12-09 17:20:41.628059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:18.760  [2024-12-09 17:20:41.628067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:31:18.760  [2024-12-09 17:20:41.628072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:31:18.760  [2024-12-09 17:20:41.628077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:31:18.760  [2024-12-09 17:20:41.628082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:31:18.760  [2024-12-09 17:20:41.628087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:31:18.760  [2024-12-09 17:20:41.628092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:31:18.760  [2024-12-09 17:20:41.628099] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:31:18.760  [2024-12-09 17:20:41.628107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:31:18.760  [2024-12-09 17:20:41.628120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:31:18.760  [2024-12-09 17:20:41.628136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:31:18.760  [2024-12-09 17:20:41.628141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:31:18.760  [2024-12-09 17:20:41.628147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:31:18.760  [2024-12-09 17:20:41.628153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:31:18.760  [2024-12-09 17:20:41.628190] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:31:18.760  [2024-12-09 17:20:41.628196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:18.760  [2024-12-09 17:20:41.628207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:31:18.760  [2024-12-09 17:20:41.628212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:31:18.760  [2024-12-09 17:20:41.628218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:31:18.760  [2024-12-09 17:20:41.628227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:18.760  [2024-12-09 17:20:41.628233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:31:18.760  [2024-12-09 17:20:41.628239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.495 ms
00:31:18.760  [2024-12-09 17:20:41.628245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:18.760  [2024-12-09 17:20:41.628286] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
00:31:18.760  [2024-12-09 17:20:41.628295] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:31:24.052  [2024-12-09 17:20:46.071377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.071431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Scrub NV cache
00:31:24.052  [2024-12-09 17:20:46.071445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4443.073 ms
00:31:24.052  [2024-12-09 17:20:46.071453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.094675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.094718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:24.052  [2024-12-09 17:20:46.094730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 23.044 ms
00:31:24.052  [2024-12-09 17:20:46.094737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.094806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.094818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:31:24.052  [2024-12-09 17:20:46.094825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.013 ms
00:31:24.052  [2024-12-09 17:20:46.094831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.121574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.121608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:24.052  [2024-12-09 17:20:46.121621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 26.701 ms
00:31:24.052  [2024-12-09 17:20:46.121629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.121656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.121663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:24.052  [2024-12-09 17:20:46.121671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:24.052  [2024-12-09 17:20:46.121678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.122123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.122147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:24.052  [2024-12-09 17:20:46.122156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.387 ms
00:31:24.052  [2024-12-09 17:20:46.122163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.122205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.122217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:24.052  [2024-12-09 17:20:46.122224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.018 ms
00:31:24.052  [2024-12-09 17:20:46.122230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.135572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.135602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:24.052  [2024-12-09 17:20:46.135610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 13.324 ms
00:31:24.052  [2024-12-09 17:20:46.135617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.159390] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4
00:31:24.052  [2024-12-09 17:20:46.159425] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:31:24.052  [2024-12-09 17:20:46.159437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.159445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore NV cache metadata
00:31:24.052  [2024-12-09 17:20:46.159453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 23.738 ms
00:31:24.052  [2024-12-09 17:20:46.159459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.170985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.171015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore valid map metadata
00:31:24.052  [2024-12-09 17:20:46.171025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.491 ms
00:31:24.052  [2024-12-09 17:20:46.171033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.179601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.179628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore band info metadata
00:31:24.052  [2024-12-09 17:20:46.179636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 8.535 ms
00:31:24.052  [2024-12-09 17:20:46.179642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.188236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.188261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore trim metadata
00:31:24.052  [2024-12-09 17:20:46.188269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 8.566 ms
00:31:24.052  [2024-12-09 17:20:46.188275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.188760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.188782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:31:24.052  [2024-12-09 17:20:46.188790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.419 ms
00:31:24.052  [2024-12-09 17:20:46.188797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.235980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.236019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore P2L checkpoints
00:31:24.052  [2024-12-09 17:20:46.236031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 47.166 ms
00:31:24.052  [2024-12-09 17:20:46.236039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.244336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:31:24.052  [2024-12-09 17:20:46.245009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.245033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:31:24.052  [2024-12-09 17:20:46.245044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 8.934 ms
00:31:24.052  [2024-12-09 17:20:46.245051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.245130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.245142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore L2P
00:31:24.052  [2024-12-09 17:20:46.245150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.012 ms
00:31:24.052  [2024-12-09 17:20:46.245156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.245196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.245206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:31:24.052  [2024-12-09 17:20:46.245212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.017 ms
00:31:24.052  [2024-12-09 17:20:46.245220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.245238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.245245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:31:24.052  [2024-12-09 17:20:46.245255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:31:24.052  [2024-12-09 17:20:46.245262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.245292] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:31:24.052  [2024-12-09 17:20:46.245301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.245308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Self test on startup
00:31:24.052  [2024-12-09 17:20:46.245315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.010 ms
00:31:24.052  [2024-12-09 17:20:46.245321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.262869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.262901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL dirty state
00:31:24.052  [2024-12-09 17:20:46.262910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.533 ms
00:31:24.052  [2024-12-09 17:20:46.262916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.262973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.052  [2024-12-09 17:20:46.262983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:31:24.052  [2024-12-09 17:20:46.262989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.028 ms
00:31:24.052  [2024-12-09 17:20:46.262996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.052  [2024-12-09 17:20:46.263971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4658.235 ms, result 0
00:31:24.052  [2024-12-09 17:20:46.279162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:24.052  [2024-12-09 17:20:46.295163] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:24.052  [2024-12-09 17:20:46.303295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:24.052   17:20:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:24.052   17:20:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:24.053   17:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:24.053   17:20:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:31:24.053   17:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:24.053  [2024-12-09 17:20:46.739427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.053  [2024-12-09 17:20:46.739461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:24.053  [2024-12-09 17:20:46.739475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:31:24.053  [2024-12-09 17:20:46.739481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.053  [2024-12-09 17:20:46.739499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.053  [2024-12-09 17:20:46.739506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:24.053  [2024-12-09 17:20:46.739514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:24.053  [2024-12-09 17:20:46.739520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.053  [2024-12-09 17:20:46.739535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:24.053  [2024-12-09 17:20:46.739542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:24.053  [2024-12-09 17:20:46.739549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:31:24.053  [2024-12-09 17:20:46.739555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:24.053  [2024-12-09 17:20:46.739600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0
00:31:24.053  true
00:31:24.053   17:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:24.053  {
00:31:24.053    "name": "ftl",
00:31:24.053    "properties": [
00:31:24.053      {
00:31:24.053        "name": "superblock_version",
00:31:24.053        "value": 5,
00:31:24.053        "read-only": true
00:31:24.053      },
00:31:24.053      {
00:31:24.053        "name": "base_device",
00:31:24.053        "bands": [
00:31:24.053          {
00:31:24.053            "id": 0,
00:31:24.053            "state": "CLOSED",
00:31:24.053            "validity": 1.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 1,
00:31:24.053            "state": "CLOSED",
00:31:24.053            "validity": 1.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 2,
00:31:24.053            "state": "CLOSED",
00:31:24.053            "validity": 0.007843137254901933
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 3,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 4,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 5,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 6,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 7,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 8,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 9,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 10,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 11,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 12,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 13,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 14,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 15,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 16,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 17,
00:31:24.053            "state": "FREE",
00:31:24.053            "validity": 0.0
00:31:24.053          }
00:31:24.053        ],
00:31:24.053        "read-only": true
00:31:24.053      },
00:31:24.053      {
00:31:24.053        "name": "cache_device",
00:31:24.053        "type": "bdev",
00:31:24.053        "chunks": [
00:31:24.053          {
00:31:24.053            "id": 0,
00:31:24.053            "state": "INACTIVE",
00:31:24.053            "utilization": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 1,
00:31:24.053            "state": "OPEN",
00:31:24.053            "utilization": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 2,
00:31:24.053            "state": "OPEN",
00:31:24.053            "utilization": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 3,
00:31:24.053            "state": "FREE",
00:31:24.053            "utilization": 0.0
00:31:24.053          },
00:31:24.053          {
00:31:24.053            "id": 4,
00:31:24.053            "state": "FREE",
00:31:24.053            "utilization": 0.0
00:31:24.053          }
00:31:24.053        ],
00:31:24.053        "read-only": true
00:31:24.053      },
00:31:24.053      {
00:31:24.053        "name": "verbose_mode",
00:31:24.053        "value": true,
00:31:24.053        "unit": "",
00:31:24.053        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:24.053      },
00:31:24.053      {
00:31:24.053        "name": "prep_upgrade_on_shutdown",
00:31:24.053        "value": false,
00:31:24.053        "unit": "",
00:31:24.053        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:24.053      }
00:31:24.053    ]
00:31:24.053  }
00:31:24.053    17:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:31:24.053    17:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:24.053    17:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:31:24.314   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:31:24.314   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:31:24.314    17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:31:24.314    17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:24.314    17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "base_device") | .bands[] | select(.state == "OPENED")] | length'
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:31:24.575  Validate MD5 checksum, iteration 1
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:24.575   17:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:24.575  [2024-12-09 17:20:47.446719] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:24.575  [2024-12-09 17:20:47.446838] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85526 ]
00:31:24.576  [2024-12-09 17:20:47.607543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:24.837  [2024-12-09 17:20:47.718929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:26.223  
[2024-12-09T17:20:50.207Z] Copying: 559/1024 [MB] (559 MBps)
[2024-12-09T17:20:51.593Z] Copying: 1024/1024 [MB] (average 534 MBps)
00:31:28.552  
00:31:28.813   17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:31:28.813   17:20:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:30.781    17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:30.781  Validate MD5 checksum, iteration 2
00:31:30.781   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dea1066b344617e97337020abd7f5e33
00:31:30.781   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dea1066b344617e97337020abd7f5e33 != \d\e\a\1\0\6\6\b\3\4\4\6\1\7\e\9\7\3\3\7\0\2\0\a\b\d\7\f\5\e\3\3 ]]
00:31:30.781   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
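A note on the comparison at upgrade_shutdown.sh@105: the right-hand side of [[ $sum != \d\e\a... ]] looks garbled but is just how xtrace prints a backslash-escaped literal, so no glob characters are interpreted; the test is a plain string inequality between the fresh md5sum and the expected digest. Equivalent sketch ($file and $expected_md5 are hypothetical stand-ins for the test's values):

    sum=$(md5sum "$file" | cut -f1 -d ' ')   # first field of md5sum output
    [[ $sum != "$expected_md5" ]] && { echo "MD5 mismatch"; exit 1; }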
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:30.782   17:20:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:31.042  [2024-12-09 17:20:53.861054] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:31.042  [2024-12-09 17:20:53.861179] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85594 ]
00:31:31.042  [2024-12-09 17:20:54.023494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:31.302  [2024-12-09 17:20:54.123921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:32.686  
[2024-12-09T17:20:56.670Z] Copying: 580/1024 [MB] (580 MBps)
[2024-12-09T17:20:58.058Z] Copying: 1024/1024 [MB] (average 538 MBps)
00:31:35.017  
00:31:35.017   17:20:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:31:35.017   17:20:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:37.558    17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd290a1bc49d464960ccc4901c62a5b0
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd290a1bc49d464960ccc4901c62a5b0 != \f\d\2\9\0\a\1\b\c\4\9\d\4\6\4\9\6\0\c\c\c\4\9\0\1\c\6\2\a\5\b\0 ]]
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85434 ]]
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85434
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
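tcp_target_shutdown_dirty is the crux of the test: SIGKILL denies the target any chance to run FTL's clean shutdown, so the superblock stays marked dirty and the restart below must rebuild state from the device (the "Killed" line further down is just bash job control reporting the reaped process). Sketch of the helper, per the xtrace:

    # dirty shutdown: no SIGTERM, no cleanup, force crash recovery on next startup
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid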
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85662
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85662
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85662 ']'
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:37.558  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:37.558   17:21:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:37.558  [2024-12-09 17:21:00.243073] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:37.558  [2024-12-09 17:21:00.243211] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85662 ]
00:31:37.558  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85434 Killed                  $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg"
00:31:37.558  [2024-12-09 17:21:00.401304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:37.558  [2024-12-09 17:21:00.497123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:38.128  [2024-12-09 17:21:01.133972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:31:38.128  [2024-12-09 17:21:01.134037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:31:38.389  [2024-12-09 17:21:01.279070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.389  [2024-12-09 17:21:01.279114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:31:38.389  [2024-12-09 17:21:01.279128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:31:38.389  [2024-12-09 17:21:01.279137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.389  [2024-12-09 17:21:01.279194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.279205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:38.390  [2024-12-09 17:21:01.279214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.038 ms
00:31:38.390  [2024-12-09 17:21:01.279222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.279247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:31:38.390  [2024-12-09 17:21:01.279941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:31:38.390  [2024-12-09 17:21:01.279959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.279966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:38.390  [2024-12-09 17:21:01.279975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.720 ms
00:31:38.390  [2024-12-09 17:21:01.279982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.280248] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:31:38.390  [2024-12-09 17:21:01.297628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.297665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Load super block
00:31:38.390  [2024-12-09 17:21:01.297678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.381 ms
00:31:38.390  [2024-12-09 17:21:01.297686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.307125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.307250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Validate super block
00:31:38.390  [2024-12-09 17:21:01.307305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.041 ms
00:31:38.390  [2024-12-09 17:21:01.307329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.307661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.307693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:38.390  [2024-12-09 17:21:01.307766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.242 ms
00:31:38.390  [2024-12-09 17:21:01.307789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.307868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.307880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:38.390  [2024-12-09 17:21:01.307889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.046 ms
00:31:38.390  [2024-12-09 17:21:01.307896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.307923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.307932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:31:38.390  [2024-12-09 17:21:01.307940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.008 ms
00:31:38.390  [2024-12-09 17:21:01.307947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.307967] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:31:38.390  [2024-12-09 17:21:01.310861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.310890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:38.390  [2024-12-09 17:21:01.310900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2.898 ms
00:31:38.390  [2024-12-09 17:21:01.310908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.310942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.310951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:31:38.390  [2024-12-09 17:21:01.310959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:31:38.390  [2024-12-09 17:21:01.310967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.310986] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0
00:31:38.390  [2024-12-09 17:21:01.311007] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes
00:31:38.390  [2024-12-09 17:21:01.311042] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes
00:31:38.390  [2024-12-09 17:21:01.311060] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes
00:31:38.390  [2024-12-09 17:21:01.311166] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:31:38.390  [2024-12-09 17:21:01.311176] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:31:38.390  [2024-12-09 17:21:01.311186] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:31:38.390  [2024-12-09 17:21:01.311195] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311204] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:31:38.390  [2024-12-09 17:21:01.311220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:31:38.390  [2024-12-09 17:21:01.311227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:31:38.390  [2024-12-09 17:21:01.311235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
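The layout summary above can be cross-checked by hand, assuming FTL's 4 KiB logical block size: 3774873 L2P entries at the reported 4-byte address size is ~14.4 MiB, which matches the 14.50 MiB "Region l2p" in the NV cache layout dump below, and the same entry count times 4 KiB gives ~14746 MiB of addressable user data out of the 18432 MiB data_btm region (the difference presumably being FTL's spare space). For example:

    echo $(( 3774873 * 4 ))                # 15099492 B ~= 14.40 MiB -> l2p region
    echo $(( 3774873 * 4096 / 1048576 ))   # ~14745 MiB of user-addressable data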
00:31:38.390  [2024-12-09 17:21:01.311245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.311253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:31:38.390  [2024-12-09 17:21:01.311261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.261 ms
00:31:38.390  [2024-12-09 17:21:01.311268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.311352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.390  [2024-12-09 17:21:01.311361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:31:38.390  [2024-12-09 17:21:01.311368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.069 ms
00:31:38.390  [2024-12-09 17:21:01.311375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.390  [2024-12-09 17:21:01.311489] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:31:38.390  [2024-12-09 17:21:01.311502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:31:38.390  [2024-12-09 17:21:01.311511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:31:38.390  [2024-12-09 17:21:01.311534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:31:38.390  [2024-12-09 17:21:01.311547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:31:38.390  [2024-12-09 17:21:01.311554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:31:38.390  [2024-12-09 17:21:01.311560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:31:38.390  [2024-12-09 17:21:01.311574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:31:38.390  [2024-12-09 17:21:01.311580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:31:38.390  [2024-12-09 17:21:01.311594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:31:38.390  [2024-12-09 17:21:01.311600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:31:38.390  [2024-12-09 17:21:01.311617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:31:38.390  [2024-12-09 17:21:01.311624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:31:38.390  [2024-12-09 17:21:01.311639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:31:38.390  [2024-12-09 17:21:01.311652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:31:38.390  [2024-12-09 17:21:01.311665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:31:38.390  [2024-12-09 17:21:01.311671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:31:38.390  [2024-12-09 17:21:01.311685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:31:38.390  [2024-12-09 17:21:01.311691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:31:38.390  [2024-12-09 17:21:01.311704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:31:38.390  [2024-12-09 17:21:01.311710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:31:38.390  [2024-12-09 17:21:01.311723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:31:38.390  [2024-12-09 17:21:01.311729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:31:38.390  [2024-12-09 17:21:01.311742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:31:38.390  [2024-12-09 17:21:01.311761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:31:38.390  [2024-12-09 17:21:01.311780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:31:38.390  [2024-12-09 17:21:01.311787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.390  [2024-12-09 17:21:01.311793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:31:38.390  [2024-12-09 17:21:01.311800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:31:38.390  [2024-12-09 17:21:01.311807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:38.390  [2024-12-09 17:21:01.311814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:38.391  [2024-12-09 17:21:01.311821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:31:38.391  [2024-12-09 17:21:01.311828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:31:38.391  [2024-12-09 17:21:01.311836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:31:38.391  [2024-12-09 17:21:01.311855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:31:38.391  [2024-12-09 17:21:01.311863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:31:38.391  [2024-12-09 17:21:01.311870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:31:38.391  [2024-12-09 17:21:01.311879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:31:38.391  [2024-12-09 17:21:01.311888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:31:38.391  [2024-12-09 17:21:01.311904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:31:38.391  [2024-12-09 17:21:01.311925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:31:38.391  [2024-12-09 17:21:01.311933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:31:38.391  [2024-12-09 17:21:01.311940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:31:38.391  [2024-12-09 17:21:01.311948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.311992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:31:38.391  [2024-12-09 17:21:01.311999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:31:38.391  [2024-12-09 17:21:01.312007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.312018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:38.391  [2024-12-09 17:21:01.312026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:31:38.391  [2024-12-09 17:21:01.312033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:31:38.391  [2024-12-09 17:21:01.312040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:31:38.391  [2024-12-09 17:21:01.312048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.312055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:31:38.391  [2024-12-09 17:21:01.312062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.628 ms
00:31:38.391  [2024-12-09 17:21:01.312069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.338746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.338778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:38.391  [2024-12-09 17:21:01.338788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 26.629 ms
00:31:38.391  [2024-12-09 17:21:01.338796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.338836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.338865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:31:38.391  [2024-12-09 17:21:01.338874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.015 ms
00:31:38.391  [2024-12-09 17:21:01.338881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.371934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.371964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:38.391  [2024-12-09 17:21:01.371975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.002 ms
00:31:38.391  [2024-12-09 17:21:01.371982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.372017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.372025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:38.391  [2024-12-09 17:21:01.372034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:38.391  [2024-12-09 17:21:01.372045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.372144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.372154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:38.391  [2024-12-09 17:21:01.372163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.039 ms
00:31:38.391  [2024-12-09 17:21:01.372172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.372216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.372225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:38.391  [2024-12-09 17:21:01.372233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.027 ms
00:31:38.391  [2024-12-09 17:21:01.372241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.388336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.388374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:38.391  [2024-12-09 17:21:01.388384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 16.069 ms
00:31:38.391  [2024-12-09 17:21:01.388394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.388494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.388505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize recovery
00:31:38.391  [2024-12-09 17:21:01.388514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:38.391  [2024-12-09 17:21:01.388522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.415917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.416052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover band state
00:31:38.391  [2024-12-09 17:21:01.416071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 27.376 ms
00:31:38.391  [2024-12-09 17:21:01.416080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.391  [2024-12-09 17:21:01.425677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.391  [2024-12-09 17:21:01.425709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:31:38.391  [2024-12-09 17:21:01.425728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.521 ms
00:31:38.391  [2024-12-09 17:21:01.425736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.653  [2024-12-09 17:21:01.485862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.653  [2024-12-09 17:21:01.485915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore P2L checkpoints
00:31:38.653  [2024-12-09 17:21:01.485929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 60.053 ms
00:31:38.653  [2024-12-09 17:21:01.485938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.653  [2024-12-09 17:21:01.486101] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8
00:31:38.653  [2024-12-09 17:21:01.486232] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9
00:31:38.653  [2024-12-09 17:21:01.486352] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12
00:31:38.653  [2024-12-09 17:21:01.486475] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0
00:31:38.653  [2024-12-09 17:21:01.486486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.653  [2024-12-09 17:21:01.486495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Preprocess P2L checkpoints
00:31:38.653  [2024-12-09 17:21:01.486505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.500 ms
00:31:38.653  [2024-12-09 17:21:01.486514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.653  [2024-12-09 17:21:01.486591] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L
00:31:38.654  [2024-12-09 17:21:01.486604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.654  [2024-12-09 17:21:01.486615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover open bands P2L
00:31:38.654  [2024-12-09 17:21:01.486625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.014 ms
00:31:38.654  [2024-12-09 17:21:01.486633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.654  [2024-12-09 17:21:01.502660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.654  [2024-12-09 17:21:01.502812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover chunk state
00:31:38.654  [2024-12-09 17:21:01.502830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 16.003 ms
00:31:38.654  [2024-12-09 17:21:01.502840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.654  [2024-12-09 17:21:01.511497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.654  [2024-12-09 17:21:01.511608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover max seq ID
00:31:38.654  [2024-12-09 17:21:01.511659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.012 ms
00:31:38.654  [2024-12-09 17:21:01.511683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:38.654  [2024-12-09 17:21:01.511797] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14
00:31:38.654  [2024-12-09 17:21:01.512053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:38.654  [2024-12-09 17:21:01.512087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, prepare
00:31:38.654  [2024-12-09 17:21:01.512109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.257 ms
00:31:38.654  [2024-12-09 17:21:01.512128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:39.599  [2024-12-09 17:21:02.488805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:39.599  [2024-12-09 17:21:02.489077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, read vss
00:31:39.599  [2024-12-09 17:21:02.489265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 975.654 ms
00:31:39.599  [2024-12-09 17:21:02.489297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:39.599  [2024-12-09 17:21:02.494458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:39.599  [2024-12-09 17:21:02.494635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, persist P2L map
00:31:39.599  [2024-12-09 17:21:02.494710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.697 ms
00:31:39.599  [2024-12-09 17:21:02.494736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:39.599  [2024-12-09 17:21:02.495451] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14
00:31:39.599  [2024-12-09 17:21:02.495625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:39.599  [2024-12-09 17:21:02.495695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, close chunk
00:31:39.599  [2024-12-09 17:21:02.495722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.829 ms
00:31:39.599  [2024-12-09 17:21:02.495743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:39.599  [2024-12-09 17:21:02.495876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:39.599  [2024-12-09 17:21:02.495909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, cleanup
00:31:39.599  [2024-12-09 17:21:02.495981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.007 ms
00:31:39.599  [2024-12-09 17:21:02.496015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:39.599  [2024-12-09 17:21:02.496075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 984.271 ms, result 0
00:31:39.599  [2024-12-09 17:21:02.496194] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15
00:31:39.599  [2024-12-09 17:21:02.496524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:39.599  [2024-12-09 17:21:02.496647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, prepare
00:31:39.599  [2024-12-09 17:21:02.496678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.331 ms
00:31:39.599  [2024-12-09 17:21:02.496698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.118413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.118674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, read vss
00:31:40.169  [2024-12-09 17:21:03.118763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 620.408 ms
00:31:40.169  [2024-12-09 17:21:03.118788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.122863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.122979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, persist P2L map
00:31:40.169  [2024-12-09 17:21:03.123043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.861 ms
00:31:40.169  [2024-12-09 17:21:03.123068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.123877] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15
00:31:40.169  [2024-12-09 17:21:03.123991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.124039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, close chunk
00:31:40.169  [2024-12-09 17:21:03.124062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.879 ms
00:31:40.169  [2024-12-09 17:21:03.124080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.124121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.124145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, cleanup
00:31:40.169  [2024-12-09 17:21:03.124165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:40.169  [2024-12-09 17:21:03.124183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.124248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 628.052 ms, result 0
00:31:40.169  [2024-12-09 17:21:03.124350] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
00:31:40.169  [2024-12-09 17:21:03.124432] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:31:40.169  [2024-12-09 17:21:03.124469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.124512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover open chunks P2L
00:31:40.169  [2024-12-09 17:21:03.124535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1612.694 ms
00:31:40.169  [2024-12-09 17:21:03.124554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.124599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.124627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize recovery
00:31:40.169  [2024-12-09 17:21:03.124647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:31:40.169  [2024-12-09 17:21:03.124666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
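That completes the crash recovery triggered by the kill -9: the dirty superblock ("SHM: clean 0" above) routed startup through band-state recovery, P2L checkpoint restore and preprocessing, and one "Recover open chunk" management process per chunk left open at the crash (seq ids 14 and 15), ending with the NV cache state loaded as 2 full and 2 empty chunks. To inspect the recovered chunk state afterwards, the same RPC used earlier would serve (a sketch, not part of the test itself):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '.properties[] | select(.name == "cache_device") | .chunks'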
00:31:40.169  [2024-12-09 17:21:03.136553] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:31:40.169  [2024-12-09 17:21:03.136735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.136765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:31:40.169  [2024-12-09 17:21:03.136820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.042 ms
00:31:40.169  [2024-12-09 17:21:03.136843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.169  [2024-12-09 17:21:03.137549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.169  [2024-12-09 17:21:03.137637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore L2P from shared memory
00:31:40.169  [2024-12-09 17:21:03.137692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.612 ms
00:31:40.170  [2024-12-09 17:21:03.137714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.139953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.170  [2024-12-09 17:21:03.140032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore valid maps counters
00:31:40.170  [2024-12-09 17:21:03.140078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2.210 ms
00:31:40.170  [2024-12-09 17:21:03.140100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.140180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.170  [2024-12-09 17:21:03.140210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Complete trim transaction
00:31:40.170  [2024-12-09 17:21:03.140254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:40.170  [2024-12-09 17:21:03.140310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.140442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.170  [2024-12-09 17:21:03.140503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:31:40.170  [2024-12-09 17:21:03.140515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.020 ms
00:31:40.170  [2024-12-09 17:21:03.140523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.140545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.170  [2024-12-09 17:21:03.140552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:31:40.170  [2024-12-09 17:21:03.140560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:31:40.170  [2024-12-09 17:21:03.140568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.140604] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:31:40.170  [2024-12-09 17:21:03.140614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.170  [2024-12-09 17:21:03.140622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Self test on startup
00:31:40.170  [2024-12-09 17:21:03.140629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.011 ms
00:31:40.170  [2024-12-09 17:21:03.140636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.140684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:40.170  [2024-12-09 17:21:03.140692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:31:40.170  [2024-12-09 17:21:03.140700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.032 ms
00:31:40.170  [2024-12-09 17:21:03.140706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:40.170  [2024-12-09 17:21:03.141741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1862.229 ms, result 0
00:31:40.170  [2024-12-09 17:21:03.157237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:40.170  [2024-12-09 17:21:03.173247] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:40.170  [2024-12-09 17:21:03.181895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:40.429  Validate MD5 checksum, iteration 1
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:40.429   17:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:40.429  [2024-12-09 17:21:03.287447] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:40.429  [2024-12-09 17:21:03.287746] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85697 ]
00:31:40.429  [2024-12-09 17:21:03.447265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:40.686  [2024-12-09 17:21:03.560239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:42.059  
[2024-12-09T17:21:05.663Z] Copying: 664/1024 [MB] (664 MBps)
[2024-12-09T17:21:07.113Z] Copying: 1024/1024 [MB] (average 667 MBps)
00:31:44.072  
00:31:44.072   17:21:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:31:44.072   17:21:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:46.053    17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dea1066b344617e97337020abd7f5e33
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dea1066b344617e97337020abd7f5e33 != \d\e\a\1\0\6\6\b\3\4\4\6\1\7\e\9\7\3\3\7\0\2\0\a\b\d\7\f\5\e\3\3 ]]
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:46.053  Validate MD5 checksum, iteration 2
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:46.053   17:21:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:46.053  [2024-12-09 17:21:08.836257] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:46.053  [2024-12-09 17:21:08.836512] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85764 ]
00:31:46.053  [2024-12-09 17:21:08.990933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:46.053  [2024-12-09 17:21:09.079308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:47.954  
[2024-12-09T17:21:11.255Z] Copying: 670/1024 [MB] (670 MBps)
[2024-12-09T17:21:19.377Z] Copying: 1024/1024 [MB] (average 676 MBps)
00:31:56.336  
00:31:56.336   17:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:31:56.336   17:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:57.272    17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd290a1bc49d464960ccc4901c62a5b0
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd290a1bc49d464960ccc4901c62a5b0 != \f\d\2\9\0\a\1\b\c\4\9\d\4\6\4\9\6\0\c\c\c\4\9\0\1\c\6\2\a\5\b\0 ]]
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85662 ]]
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85662
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85662 ']'
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85662
00:31:57.272    17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:57.272    17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85662
00:31:57.272  killing process with pid 85662
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85662'
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85662
00:31:57.272   17:21:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85662
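Unlike the earlier dirty shutdown, killprocess tears the target down gracefully: confirm the pid is alive and is ours, send a plain SIGTERM, then wait so the process can run the full FTL shutdown sequence that follows (persist L2P, NV cache, band and trim metadata, superblock, and finally "Set FTL clean state"). Sketch, assuming the target is a child of the current shell:

    kill -0 "$spdk_tgt_pid" && kill "$spdk_tgt_pid"   # default SIGTERM, not -9
    wait "$spdk_tgt_pid"                              # block until shutdown finishes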
00:31:57.839  [2024-12-09 17:21:20.843842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:31:57.839  [2024-12-09 17:21:20.856202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.856239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinit core IO channel
00:31:57.839  [2024-12-09 17:21:20.856250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:57.839  [2024-12-09 17:21:20.856257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.856277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:31:57.839  [2024-12-09 17:21:20.858416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.858445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Unregister IO device
00:31:57.839  [2024-12-09 17:21:20.858459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2.126 ms
00:31:57.839  [2024-12-09 17:21:20.858466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.858673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.858683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Stop core poller
00:31:57.839  [2024-12-09 17:21:20.858690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.188 ms
00:31:57.839  [2024-12-09 17:21:20.858696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.859791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.859951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist L2P
00:31:57.839  [2024-12-09 17:21:20.859964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.082 ms
00:31:57.839  [2024-12-09 17:21:20.859976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.860869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.860885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finish L2P trims
00:31:57.839  [2024-12-09 17:21:20.860893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.863 ms
00:31:57.839  [2024-12-09 17:21:20.860899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.868512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.868538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist NV cache metadata
00:31:57.839  [2024-12-09 17:21:20.868547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.582 ms
00:31:57.839  [2024-12-09 17:21:20.868558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.872762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.872788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist valid map metadata
00:31:57.839  [2024-12-09 17:21:20.872797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.174 ms
00:31:57.839  [2024-12-09 17:21:20.872804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:57.839  [2024-12-09 17:21:20.872873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:57.839  [2024-12-09 17:21:20.872881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist P2L metadata
00:31:57.839  [2024-12-09 17:21:20.872889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.051 ms
00:31:57.839  [2024-12-09 17:21:20.872899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.098  [2024-12-09 17:21:20.880243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.098  [2024-12-09 17:21:20.880275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist band info metadata
00:31:58.098  [2024-12-09 17:21:20.880283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.330 ms
00:31:58.098  [2024-12-09 17:21:20.880288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.098  [2024-12-09 17:21:20.887224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.098  [2024-12-09 17:21:20.887335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist trim metadata
00:31:58.098  [2024-12-09 17:21:20.887347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 6.902 ms
00:31:58.098  [2024-12-09 17:21:20.887353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.098  [2024-12-09 17:21:20.894431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.098  [2024-12-09 17:21:20.894529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist superblock
00:31:58.098  [2024-12-09 17:21:20.894541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.051 ms
00:31:58.098  [2024-12-09 17:21:20.894548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.098  [2024-12-09 17:21:20.901477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.098  [2024-12-09 17:21:20.901575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL clean state
00:31:58.098  [2024-12-09 17:21:20.901586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 6.881 ms
00:31:58.098  [2024-12-09 17:21:20.901592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.901616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:31:58.099  [2024-12-09 17:21:20.901629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:58.099  [2024-12-09 17:21:20.901637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   2:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:58.099  [2024-12-09 17:21:20.901643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   3:     2048 / 261120 	wr_cnt: 1	state: closed
00:31:58.099  [2024-12-09 17:21:20.901649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:31:58.099  [2024-12-09 17:21:20.901739] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:31:58.099  [2024-12-09 17:21:20.901745] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID:         b9c618aa-b25b-4f88-a58e-97446d043b7d
00:31:58.099  [2024-12-09 17:21:20.901752] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs:    524288
00:31:58.099  [2024-12-09 17:21:20.901758] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes:        320
00:31:58.099  [2024-12-09 17:21:20.901763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes:         0
00:31:58.099  [2024-12-09 17:21:20.901769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF:                 inf
00:31:58.099  [2024-12-09 17:21:20.901775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:31:58.099  [2024-12-09 17:21:20.901781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   crit: 0
00:31:58.099  [2024-12-09 17:21:20.901790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   high: 0
00:31:58.099  [2024-12-09 17:21:20.901795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]    low: 0
00:31:58.099  [2024-12-09 17:21:20.901800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]  start: 0
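The "WAF: inf" line in the stats dump above is expected for this run: assuming the usual definition of write amplification factor as the ratio of total device writes to user writes, a run with zero user writes has no finite ratio:

  WAF = total writes / user writes = 320 / 0, which the dump prints as "inf"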
00:31:58.099  [2024-12-09 17:21:20.901807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.099  [2024-12-09 17:21:20.901814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Dump statistics
00:31:58.099  [2024-12-09 17:21:20.901822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.192 ms
00:31:58.099  [2024-12-09 17:21:20.901829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.911708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.099  [2024-12-09 17:21:20.911731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize L2P
00:31:58.099  [2024-12-09 17:21:20.911740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 9.864 ms
00:31:58.099  [2024-12-09 17:21:20.911746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.912056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:58.099  [2024-12-09 17:21:20.912070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize P2L checkpointing
00:31:58.099  [2024-12-09 17:21:20.912077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.289 ms
00:31:58.099  [2024-12-09 17:21:20.912082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.947014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:20.947044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:58.099  [2024-12-09 17:21:20.947054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:20.947061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.947091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:20.947099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:58.099  [2024-12-09 17:21:20.947106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:20.947113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.947181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:20.947189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:58.099  [2024-12-09 17:21:20.947196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:20.947203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:20.947219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:20.947227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:58.099  [2024-12-09 17:21:20.947233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:20.947239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.010353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.010391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:58.099  [2024-12-09 17:21:21.010401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.010407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.061756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.061795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:58.099  [2024-12-09 17:21:21.061804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.061812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.061894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.061903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:58.099  [2024-12-09 17:21:21.061911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.061917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.061973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.061989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:58.099  [2024-12-09 17:21:21.061996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.062003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.062082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.062091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:58.099  [2024-12-09 17:21:21.062097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.062104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.062131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.062138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize superblock
00:31:58.099  [2024-12-09 17:21:21.062148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.062155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.062188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.062195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:58.099  [2024-12-09 17:21:21.062202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.062208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.062249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:58.099  [2024-12-09 17:21:21.062259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:58.099  [2024-12-09 17:21:21.062266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:58.099  [2024-12-09 17:21:21.062272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:58.099  [2024-12-09 17:21:21.062380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 206.150 ms, result 0
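Each step in the shutdown above is reported as a trace_step record (name, duration, status), closed out by the finish_msg summary line (FTL shutdown, 206.150 ms, result 0). A minimal awk pass, assuming exactly the name:/duration: layout printed by mngt/ftl_mngt.c above, can total per-step durations from a saved console log:

  awk '
      /trace_step/ && /name:/ {
          line = $0
          sub(/.*name:[ \t]*/, "", line)   # keep the multi-word step name
          name = line
      }
      /trace_step/ && /duration:/ {
          line = $0
          sub(/.*duration:[ \t]*/, "", line)
          split(line, f, " ")              # f[1] is the millisecond value
          total[name] += f[1]
      }
      END { for (n in total) printf "%-32s %10.3f ms\n", n, total[n] }
  ' console.log

console.log here is a placeholder for wherever this output was captured.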
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:59.036  Remove shared memory files
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85434
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:31:59.036  ************************************
00:31:59.036  END TEST ftl_upgrade_shutdown
00:31:59.036  ************************************
00:31:59.036  
00:31:59.036  real	1m28.732s
00:31:59.036  user	2m1.396s
00:31:59.036  sys	0m20.154s
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:59.036   17:21:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
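The doubled "rm -f rm -f" tokens in the remove_shm xtrace above are most likely unquoted variable expansion showing up under set -x; only the trailing paths matter. A hypothetical helper with the same observable effect (the two /dev/shm paths are taken from the log, everything else is assumed):

  remove_shm_sketch() {                               # hypothetical, not SPDK's actual helper
      local tgt_pid=$1
      echo 'Remove shared memory files'
      rm -f "/dev/shm/spdk_tgt_trace.pid${tgt_pid}"   # per-target trace buffer
      rm -f /dev/shm/iscsi                            # iscsi shm leftover (path from the log)
  }
  remove_shm_sketch 85434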
00:31:59.036  Process with pid 76374 is not found
00:31:59.036  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@14 -- # killprocess 76374
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@954 -- # '[' -z 76374 ']'
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@958 -- # kill -0 76374
00:31:59.036  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76374) - No such process
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76374 is not found'
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85934
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85934
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@835 -- # '[' -z 85934 ']'
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:59.036   17:21:21 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:59.036   17:21:21 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:59.036  [2024-12-09 17:21:21.864960] Starting SPDK v25.01-pre git sha1 9237e57ed / DPDK 24.03.0 initialization...
00:31:59.036  [2024-12-09 17:21:21.865063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85934 ]
00:31:59.036  [2024-12-09 17:21:22.015455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:59.295  [2024-12-09 17:21:22.102985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:59.866   17:21:22 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:59.866   17:21:22 ftl -- common/autotest_common.sh@868 -- # return 0
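The waitforlisten 85934 call above blocks until the freshly launched spdk_tgt is reachable; the xtrace shows its inputs (rpc_addr=/var/tmp/spdk.sock, max_retries=100) and its exit path. A reconstruction of that pattern, with the readiness probe and sleep interval as assumptions rather than SPDK's exact code:

  waitforlisten() {                                 # reconstruction, not the real helper
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
          [[ -S $rpc_addr ]] && return 0            # socket exists: assume ready
          sleep 0.5
      done
      return 1
  }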
00:31:59.866   17:21:22 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:32:00.125  nvme0n1
00:32:00.125   17:21:22 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:32:00.125    17:21:22 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:00.125    17:21:22 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:00.384   17:21:23 ftl -- ftl/common.sh@28 -- # stores=d7521178-9514-4bcb-8b6b-0f1c79d118a5
00:32:00.384   17:21:23 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:32:00.384   17:21:23 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d7521178-9514-4bcb-8b6b-0f1c79d118a5
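The clear_lvols step above is a plain list-then-delete loop over the JSON-RPC interface; condensed from the commands in the log (run from the SPDK repo root):

  stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
  for lvs in $stores; do
      scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  done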
00:32:00.384   17:21:23 ftl -- ftl/ftl.sh@23 -- # killprocess 85934
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 85934 ']'
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@958 -- # kill -0 85934
00:32:00.384    17:21:23 ftl -- common/autotest_common.sh@959 -- # uname
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:00.384    17:21:23 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85934
00:32:00.384  killing process with pid 85934
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85934'
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@973 -- # kill 85934
00:32:00.384   17:21:23 ftl -- common/autotest_common.sh@978 -- # wait 85934
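Both teardowns in this section (pid 85662 earlier, pid 85934 here) follow the same killprocess pattern: validate the pid, probe it with kill -0, resolve the process name, then kill and reap. A reconstruction from the xtrace; the sudo branch hinted at by the '[' reactor_0 = sudo ']' test is an assumption:

  killprocess() {                                   # reconstruction, differs from SPDK's code
      local pid=$1
      [[ -n $pid ]] || return 1
      if ! kill -0 "$pid" 2>/dev/null; then
          echo "Process with pid $pid is not found"
          return 0
      fi
      local process_name=
      [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [[ $process_name == sudo ]]; then
          sudo kill "$pid"                          # root-owned target (assumption)
      else
          kill "$pid"
          wait "$pid" 2>/dev/null || true           # wait only reaps children (assumption)
      fi
  }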
00:32:01.758   17:21:24 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:32:01.758  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:02.017  Waiting for block devices as requested
00:32:02.017  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:32:02.017  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:32:02.017  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:32:02.276  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:32:07.602  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
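setup.sh reset above hands the test controllers back from uio_pci_generic to the kernel nvme driver; the trailing warning for 0000:00:13.0 says only that udev events for some of its block devices were not caught in time, not that the rebind failed. A small hypothetical probe to confirm the bindings after such a reset (BDFs taken from the log):

  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      drv_link=/sys/bus/pci/devices/$bdf/driver
      if [[ -e $drv_link ]]; then
          echo "$bdf -> $(basename "$(readlink -f "$drv_link")")"   # expect: nvme
      else
          echo "$bdf -> (no driver bound)"
      fi
  done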
00:32:07.602   17:21:30 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:32:07.602  Remove shared memory files
00:32:07.602  ************************************
00:32:07.602  END TEST ftl
00:32:07.602  ************************************
00:32:07.602   17:21:30 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:07.602   17:21:30 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:32:07.602   17:21:30 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:32:07.602   17:21:30 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:32:07.602   17:21:30 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:07.602   17:21:30 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:32:07.602  
00:32:07.602  real	14m13.732s
00:32:07.602  user	16m34.413s
00:32:07.602  sys	1m19.715s
00:32:07.602   17:21:30 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:07.602   17:21:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:07.602   17:21:30  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:32:07.602   17:21:30  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:32:07.602   17:21:30  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:32:07.602   17:21:30  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:32:07.602   17:21:30  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:32:07.602   17:21:30  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:32:07.602   17:21:30  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:32:07.602   17:21:30  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:32:07.602   17:21:30  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:32:07.602   17:21:30  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:32:07.602   17:21:30  -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:07.602   17:21:30  -- common/autotest_common.sh@10 -- # set +x
00:32:07.602   17:21:30  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:32:07.602   17:21:30  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:32:07.602   17:21:30  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:32:07.602   17:21:30  -- common/autotest_common.sh@10 -- # set +x
00:32:09.008  INFO: APP EXITING
00:32:09.008  INFO: killing all VMs
00:32:09.008  INFO: killing vhost app
00:32:09.008  INFO: EXIT DONE
00:32:09.008  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:09.583  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:09.583  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:09.583  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:32:09.583  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:09.843  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:10.414  Cleaning
00:32:10.414  Removing:    /var/run/dpdk/spdk0/config
00:32:10.414  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:10.414  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:10.414  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:10.414  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:10.414  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:32:10.414  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:32:10.414  Removing:    /var/run/dpdk/spdk0
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58186
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58383
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58595
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58688
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58722
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58845
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid58863
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59051
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59143
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59233
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59344
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59436
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59475
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59512
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59582
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid59672
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60097
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60161
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60224
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60240
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60337
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60347
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60460
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60476
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60529
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60547
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60606
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60624
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60790
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60827
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid60911
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61089
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61167
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61204
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61638
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61738
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61850
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61903
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid61929
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid62007
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid62635
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid62666
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid63137
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid63235
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid63352
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid63405
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid63425
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid63456
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65291
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65417
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65421
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65444
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65483
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65487
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65499
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65544
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65548
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65560
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65605
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65609
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid65621
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid67007
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid67104
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid68517
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70276
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70350
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70432
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70544
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70636
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70741
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70815
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid70897
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71001
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71095
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71197
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71277
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71352
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71462
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71554
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71649
00:32:10.414  Removing:    /var/run/dpdk/spdk_pid71718
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid71793
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid71903
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid71995
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72096
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72160
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72240
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72315
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72390
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72499
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72590
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72690
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72766
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72839
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72919
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid72989
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid73098
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid73194
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid73338
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid73622
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid73666
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74115
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74304
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74406
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74528
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74576
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74604
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74911
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid74960
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid75041
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid75434
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid75574
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid76374
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid76506
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid76681
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid76773
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid77092
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid77383
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid77725
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid77908
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78099
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78158
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78355
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78386
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78445
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78710
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid78950
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid79695
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid80535
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid81145
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid82082
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid82228
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid82321
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid82811
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid82865
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid83448
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid84067
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid84867
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid84989
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85032
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85102
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85162
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85216
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85434
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85526
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85594
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85662
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85697
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85764
00:32:10.677  Removing:    /var/run/dpdk/spdk_pid85934
00:32:10.677  Clean
00:32:10.677   17:21:33  -- common/autotest_common.sh@1453 -- # return 0
00:32:10.677   17:21:33  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:10.677   17:21:33  -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:10.677   17:21:33  -- common/autotest_common.sh@10 -- # set +x
00:32:10.939   17:21:33  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:10.939   17:21:33  -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:10.939   17:21:33  -- common/autotest_common.sh@10 -- # set +x
00:32:10.939   17:21:33  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:10.939   17:21:33  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:10.939   17:21:33  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:10.939   17:21:33  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:10.939    17:21:33  -- spdk/autotest.sh@398 -- # hostname
00:32:10.939   17:21:33  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:11.202  geninfo: WARNING: invalid characters removed from testname!
00:32:37.795   17:21:58  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:39.704   17:22:02  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:42.234   17:22:04  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:43.607   17:22:06  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:46.147   17:22:08  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:48.049   17:22:10  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
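The coverage post-processing above is one merge plus a chain of in-place filters on the combined tracefile: join the base and test captures, then strip DPDK, system, example, and tool sources. Condensed, keeping the branch/function-coverage flags but omitting the genhtml/geninfo --rc options for brevity, with the .../spdk/../output path shortened to $OUT:

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
  OUT=/home/vagrant/spdk_repo/output    # resolved form of spdk/../output

  lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info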
00:32:49.952   17:22:12  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:49.952   17:22:12  -- spdk/autorun.sh@1 -- $ timing_finish
00:32:49.952   17:22:12  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:49.952   17:22:12  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:49.952   17:22:12  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:49.952   17:22:12  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:49.952  + [[ -n 5028 ]]
00:32:49.952  + sudo kill 5028
00:32:49.962  [Pipeline] }
00:32:49.978  [Pipeline] // timeout
00:32:49.984  [Pipeline] }
00:32:49.998  [Pipeline] // stage
00:32:50.004  [Pipeline] }
00:32:50.018  [Pipeline] // catchError
00:32:50.027  [Pipeline] stage
00:32:50.030  [Pipeline] { (Stop VM)
00:32:50.042  [Pipeline] sh
00:32:50.321  + vagrant halt
00:32:52.854  ==> default: Halting domain...
00:32:59.503  [Pipeline] sh
00:32:59.786  + vagrant destroy -f
00:33:02.334  ==> default: Removing domain...
00:33:02.922  [Pipeline] sh
00:33:03.207  + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:33:03.218  [Pipeline] }
00:33:03.232  [Pipeline] // stage
00:33:03.237  [Pipeline] }
00:33:03.251  [Pipeline] // dir
00:33:03.256  [Pipeline] }
00:33:03.270  [Pipeline] // wrap
00:33:03.276  [Pipeline] }
00:33:03.288  [Pipeline] // catchError
00:33:03.298  [Pipeline] stage
00:33:03.300  [Pipeline] { (Epilogue)
00:33:03.313  [Pipeline] sh
00:33:03.600  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:08.887  [Pipeline] catchError
00:33:08.889  [Pipeline] {
00:33:08.901  [Pipeline] sh
00:33:09.188  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:09.188  Artifacts sizes are good
00:33:09.198  [Pipeline] }
00:33:09.212  [Pipeline] // catchError
00:33:09.222  [Pipeline] archiveArtifacts
00:33:09.229  Archiving artifacts
00:33:09.351  [Pipeline] cleanWs
00:33:09.365  [WS-CLEANUP] Deleting project workspace...
00:33:09.365  [WS-CLEANUP] Deferred wipeout is used...
00:33:09.389  [WS-CLEANUP] done
00:33:09.390  [Pipeline] }
00:33:09.405  [Pipeline] // stage
00:33:09.410  [Pipeline] }
00:33:09.424  [Pipeline] // node
00:33:09.429  [Pipeline] End of Pipeline
00:33:09.466  Finished: SUCCESS